
New Energy Technology (PACE, 1990, 60 p.)

Contents

Solar energy - The nature of natural and "EWEC" solar collectors
The electro-resonance generator
Advances with Viktor Schauberger's implosion system
Plastic engine technology
A novel D.C. motor
Permanent magnet motors - A brief overview
Aneutronic energy - Search for nonradioactive nonproliferating nuclear power
LUMELOID* solar plastic film and LEPCON* submicron dipolar antennae on glass
Charged aerosol air purifiers for the suppression of acid rain
Clean engines - A combination of advanced materials and a new engine design
Developments on the flexible mirror


Clean energy developments

Solar energy - The nature of natural and "EWEC" solar collectors

Philip S. Callahan
2016 N. W. 27th Street
GAINESVILLE, Florida 32605
United States of America

The physical symbol for energy is E, and is defined as that property of a system that is a measure of its capacity for doing work. It is hardly necessary to point out that every farm crop, grassland, forest and creature there is, plus all the seas of the world, depend on the sun for energy. It would be impossible to even begin to calculate a value for E that represents the amount of work in joules that spreads across the face of the Earth each second of each day. Imagine, for instance, the amount of work accomplished each day by each tree, each insect, each monkey, etc. etc., being multiplied by the thousands of different species, and the millions, or billions upon billions, of these "working" individuals. Despite this example of the power of the Sun, I was constantly informed by energy "experts" that there is not enough energy from the Sun to run the miserly few electrical systems in my home.

As the Japanese haiku poem by the gracious poet Shiko states:

"Through scarlet maple leaves. the western rays
Have set the finches' flitting wings ablaze."

The poet's eyes, of course, were the collectors of that flitting scarlet blaze, and the eye is composed of a lens and numerous spines, or rods and cones if you prefer.

In this paper I use the word spine to designate any natural object that takes the form of an elongated lens: in other words, a dielectric material (insulative substance) that geometrically may be considered as a lens taken at its center point and pulled out into an elongated rod, cone, pyramid, etc., such as the helical cones on the legs of some mites (Figure 1).


Figure 1. An insect helical dielectric waveguide receiving antenna, called a spine or sensillum, responding to specific infrared wavelengths.

Spines on living systems are often set in complex patterns somewhat as in the case of manmade antennae (Figure 2).


Figure 2. Scanning electron photograph of tapered sensilla.

The amount of energy from the Sun impinging on collectors depends on the angle of the Sun on the horizon, the presence of water vapour and other atmospheric constituents and atmospheric radiance itself (Figure 3).


Figure 3. Spectrum of absorbed solar energy as a function of atmospheric radiance and the zenith angle of the Sun.

Solar cells presently have a conversion efficiency of about 15%. Thus, if 1,000 watts of sunlight fall upon one square meter of solar cell material, the resulting DC output is 150 watts.

The EWEC promises some major advantages over solar cells. Early calculations show that the EWEC may have a theoretical conversion efficiency of 50-70%. Mechanical flexibility appears inherent in the EWEC if the absorber elements are mounted on a flexible substrate, which would allow manufacture by a sheet-roll process - a feat of economy not presently achieved with solar cells, which often crack and lose efficiency when flexed.
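For comparison, the DC output per square metre implied by these figures follows directly; a minimal sketch in Python, using the 1,000-watt insolation and 15% efficiency quoted above and the 50-70% theoretical range claimed for the EWEC:

```python
# Rough comparison of DC output per square metre, using the figures quoted above.
# The nominal insolation of 1,000 W per square metre is the value used in the text.

INSOLATION_W_PER_M2 = 1000.0

def dc_output(efficiency: float, area_m2: float = 1.0) -> float:
    """DC output in watts for a collector of the given conversion efficiency."""
    return INSOLATION_W_PER_M2 * area_m2 * efficiency

solar_cell = dc_output(0.15)   # present-day silicon cell, ~15%
ewec_low = dc_output(0.50)     # lower bound of the claimed EWEC range
ewec_high = dc_output(0.70)    # upper bound of the claimed EWEC range

print(f"Solar cell (15%): {solar_cell:.0f} W per square metre")
print(f"EWEC (50-70%):    {ewec_low:.0f}-{ewec_high:.0f} W per square metre")
```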

The biggest problem is not in miniaturizing the small collector spines, but rather in coupling into the collected infrared energy, and converting it to electricity. From my observations of insects and plants, there is no doubt this can be done by copying nature!

My experiments indicate the hairs on the leaves of plants are really dielectric waveguide antennae for collecting energy in the form of infrared or microwave signals from the Sun. Bailey and I have excellent models in plants and insects for our solar energy research. (The Sun has millions of narrowband radiating emissions in all portions of the spectrum.)

I am suspicious of a researcher who is not infatuated with the elegance of nature, spines being a part of that elegance. Electromagnetic waves are as much a part of nature as are the insects and the plants themselves. I am disappointed in Hertz, for whom the electromagnetic waves are named. He was not alone in the discovery of radio waves - Hughes deserves equal recognition - and he once wrote to the Dresden Chamber of Commerce that all research into the characteristics of electromagnetic waves should be discouraged, because they could not be utilized to any good purpose. So be it!

A working model

The Sun gives off tremendous amounts of radiation in the 3 to 10 cm region. Lately, I have been utilizing a Gunn diode to generate 3 cm radiation and modeling wax-coated dielectric spines to test their efficiency as open resonator collectors of specific radiation bands (Figs. 4 & 5). I have demonstrated that long tapered spines, as occur on insect antennae, are directional (Fig. 4), whereas short curved spines, as occur on many plants (called trichomes), are omnidirectional (Fig. 5). They collect and amplify in a circular (omnidirectional) pattern. Such dielectric open resonators focus, and thereby amplify, the incoming radiation up to 4 times, e.g. from a baseline of 10 microamps to between 30 and 45 microamps (Figs. 4 & 5). As pointed out in our work, an array of such spines or cones, matched to a solar cell, would increase its efficiency from 15 or 20% to 70 or 80%, well within practical economic utilization.


Figures 4 and 5. Polar plots of long, smoothly tapered and short, steeply bent sensilla. The long sensillum is a highly efficient directional end-aperture antenna with 4x amplification; the short one is omnidirectional with 2x amplification.

The atmosphere does indeed complicate the study of solar radiation considered as an energy source. The fact that the transmission of solar energy through the atmosphere is a complex problem should not be used, as it often is, to proclaim resonant solar collector systems impractical.

As I pointed out in my book "Tuning in to Nature" (1), the candle in many ways resembles the Sun. It gives off numerous narrow band visible and infrared emissions. If we tune to a candle from across a few yards of space, we should be able to tune to the Sun across a few million miles of space. To accomplish tuning in to the sun, we will without a doubt have to consider doing it with micrometer-long spines, or pits, in the same manner that plants and insects accomplish this.

Spinelike projections, considered as waveguides for collecting energy, have been thought about by other researchers. There are numerous historical examples of researchers in various parts of the world having the same idea at the same time. The best example is the simultaneous American and Soviet invention of the laser. My own work is also a good example of two researchers working on different projects, but with the same theoretical viewpoint: both of us were doing research on the same campus at the University of Florida, totally unaware of one another!

Professor Robert Bailey, of the Electrical Engineering Department, had designed and produced a metal model of what he calls an Electromagnetic Wave Energy Converter (EWEC). In 1973, he obtained a patent through NASA for his EWEC.

The drawing from his patent, which is a design for one that works in the microwave portion of the spectrum, is very similar to the tapered insect dielectric spines. When Professor Bailey was told about my work by another researcher who had heard my seminar before a meeting of electrical engineers, he immediately recognized the similarities of the problems involved in our respective research fields. He crossed the campus - a distance of approximately one mile - to tell me about his work. As fate would have it, we were slightly closer together than were the American and Soviet developers of the laser.

The EWEC was designed for the specific purpose of collecting the Sun's electromagnetic energy and converting it directly into electricity for domestic use.

Professor Bailey believes, as do I, that to efficiently tune into the Sun we must miniaturize his EWEC down to visible and infrared wavelengths, where the greater portion of the Sun's energy lies. This means utilizing dielectric materials, not metals, as dielectrics are the best resonators at these short wavelengths. We must design the EWEC as a dielectric antenna-collector of radiation.

The current chief solar energy conversion technique is the silicon solar cell, an established technology created for the space program. Sunlight impinging upon these smooth-surfaced cells causes electrons to flow, and thereby creates direct current.

The EWEC Converter

In essence, Professor Bailey's patent consists of a series of dielectric absorbers of solar radiation (Fig. 6). As he points out, there are three main considerations for such a solar collector modeled after natural insect and plant spine-like absorbers and focusers.


Figure 6. The EWEC (Electromagnetic Wave Energy Converter) US Patent Number 3,760,257 issued to Prof. Robert L. Bailey on September 18, 1973. It consists of a series of dielectric absorbers.

1. They are one of the principal elements of the invention. Without them, the invention is useless.

2. Less is known about such absorbers than about any other element used in the invention. Considerably more precise scientific and engineering knowledge is needed before practical converters could be designed with assurance that they would work.

3. Knowledge learned about absorbers probably will have useful spinoffs to other areas, e.g. solar thermal absorbers for terrestrial and space use. There is some preliminary indication that better solar thermal absorbers could be made by not just "roughing up" the surface as is now done, but by causing the surface to have a coating of uniform height cones appropriately arrayed.

4. An additional spinoff from this research may accrue to the agricultural area of insect control. The understanding of absorbers from this research will have direct application to insect antennae and may lead to super-efficient electronic means of trapping insects, preventing crop and food destruction.

Information about the Electromagnetic Wave Energy Converter (EWEC) and its stage of development can be found in Professor Bailey's US Patent No. 3,760,257 (September 18, 1973, obtained through NASA). It is hoped other researchers will continue the development of this elegant concept.

References

1. Callahan, Philip S. Tuning in to Nature. Devin-Adair, Greenwich, 1975.

2. Callahan, Philip S. Dielectric waveguide modeling at 3.0 cm of the antenna sensilla of the lovebug, Plecia nearctica Hardy. Applied Optics 24, April 15, 1985, pp. 1094-1097.

3. Engineering and Industrial Experiment Station, University of Florida, Gainesville. Final Report: Electromagnetic Wave Energy Conversion Research, to NASA Goddard Space Flight Center, Greenbelt, MD. UF Project 2451-E43, September 30, 1975, 49 pp.

The electro-resonance generator

Leon R. Dragone
P.O. Box 175
SPRINGFIELD, Massachusetts 01101
United States of America

The basic equation governing the operation of this device is:

L dI/dt + RtI + (1/C)∫I dt + Ep = Eb    (1)
where:

L is the inductance of the coil,

Rt is the total resistance including the coil's, load's and battery's,

Eb is the battery voltage,

C is the total capacitance in the circuit, parasitic and otherwise,

Ep is the voltage across the arc.

Now it is an established fact that an arc has a negative dynamic resistance. Much empirical work on the electrical characteristics of arcs was carried out during the period 1920-40. One typical voltage-current relation for an arc is:

Ep = A + B/Iⁿ

where A, B and n are constants of the arc. Using this, we can write (1) as:

L dI/dt + RtI + (1/C)∫I dt + A + B/Iⁿ = Eb    (2)

It is clear that the power consumed as Joule heating of the resistances in the circuit is P = RtI², and the electrical energy consumed as heat during a time t is:

Q = ∫RtI² dt

Now let us differentiate (2) with respect to time:

L d²I/dt² + Rt dI/dt + I/C - (nB/Iⁿ⁺¹) dI/dt = 0    (3)

This can be put into the form (setting n = 1 for simplicity):

L d²I/dt² + (Rt - B/I²) dI/dt + I/C = 0    (4)
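As a consistency check, the step from (2) to (4) can be carried out symbolically. The sketch below assumes the Ayrton-type arc law Ep = A + B/I (i.e. n = 1) written into equation (2) above; it is an illustration of the algebra only, not of the device itself:

```python
# Symbolic check that differentiating equation (2), with n = 1, gives the
# damped-oscillator form (4) with damping coefficient (Rt - B/I**2).
import sympy as sp

t = sp.symbols('t')
L, Rt, C, A, B = sp.symbols('L R_t C A B', positive=True)
I = sp.Function('I')(t)      # circuit current
q = sp.Function('q')(t)      # charge on the capacitance, so dq/dt = I

# Left-hand side of equation (2); the right-hand side, Eb, is constant
# and drops out on differentiation.
eq2_lhs = L*I.diff(t) + Rt*I + q/C + A + B/I

# Differentiate with respect to time and substitute dq/dt = I.
eq4_lhs = eq2_lhs.diff(t).subs(q.diff(t), I)

print(sp.collect(sp.expand(eq4_lhs), I.diff(t)))
# prints L*I'' + (R_t - B/I**2)*I' + I/C, the left-hand side of equation (4)
```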

Now:

L d²I/dt² + I/C = 0

is the equation of an undamped oscillator, and the term (Rt - B/I²) dI/dt is the damping term. It is clear that the coefficient (Rt - B/I²) can be positive (dissipative), zero (superconductive), or negative. If negative, the second term acts to produce a negative Joule heating effect; i.e. heat is absorbed from the surroundings by the circuit, and produces a spontaneous increase of the current in the circuit. Clearly, this is in violation of the Second Law of Thermodynamics. When I increases to the point where

Rt - B/I² > 0, i.e. I² > B/Rt,

the oscillations become damped out. The decaying exponential is actually observed on the scope during operation of our device, and we can write:

I = I0 e^(-ct)    (c = constant)

as an empirical equation of this dissipative part of our current oscillations. During these decaying oscillations, the overall resistance in the circuit acts to damp out the current. This is a conventional and well-understood phenomenon - there is no free energy production in this mode.

The new or anomalous behaviour occurs when Rt - B/I² < 0, for then the so-called dissipative term clearly acts to sustain and build the amplitude of the current oscillations. (In this part of the analysis, only positive currents are considered; i.e., I > 0.) This increase in current amplitude is due to the negative resistance Rt - B/I² < 0, and we can identify this resistance times the current squared as a negative Joule heating effect; i.e., instead of dissipating electrical energy as heat, the circuit is absorbing heat and converting it into electrical energy! To find the power dissipated, we write:

P = (Rt - B/I²) I²

It is clear that if I² < B/Rt, the coefficient of I² is negative and so the power dissipated is negative. This means that the circuit is suffering a Joule cooling effect, and adding electrical power to the system. This electrical power is evidenced by an increasing amplitude of the current (an experimentally observed fact) and by the absorption of heat from the surroundings. The heat energy produced in the circuit in a time t is:

Heat = Q = ∫(RtI² - B) dt

and when the second term predominates over the first, i.e., when I is small,

Heat = Q < 0

which shows that the circuit is producing negative heat; i.e., it is cooling or taking in heat.

This heat is converted into an increase in amplitude of the current, i.e., into higher grade electrical energy.
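Whatever one makes of the thermodynamic interpretation, the sign change in P = (Rt - B/I²)I² is simple arithmetic. A short numerical sketch, with arbitrary, purely illustrative values of Rt and B (not measured ones):

```python
# Sign of the power term P = (Rt - B/I**2) * I**2 = Rt*I**2 - B as the current varies.
# Rt and B are arbitrary illustrative values, not measurements from the device.
import math

Rt = 10.0   # total circuit resistance, ohms (illustrative)
B = 2.5     # arc constant in the assumed relation Ep = A + B/I (illustrative)

I_threshold = math.sqrt(B / Rt)   # current at which Rt - B/I**2 changes sign

for I in (0.1, 0.3, I_threshold, 1.0, 2.0):
    P = (Rt - B / I**2) * I**2
    if P < 0:
        regime = "negative (the claimed Joule cooling regime)"
    elif abs(P) < 1e-12:
        regime = "zero (threshold)"
    else:
        regime = "positive (ordinary Joule heating)"
    print(f"I = {I:5.3f} A  ->  P = {P:7.3f} W   {regime}")
```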

This theoretical description of the system fits well with our experimental observations and can act as a guideline for further improvements. However, the system seems to demonstrate other anomalies. For example, while one part of the circuit shows negative current flow, another part simultaneously shows a positive current flow. Also, there is strong radiative coupling between various parts of the system and the surroundings. Further, two types of arcs have been experimentally discerned: the cool arc, white in colour, with which the anomalous current is associated; and the hot arc, blue in colour, with which no oscillation or anomalous current occurs.

Hence the identification of

Ep = A + B/Iⁿ

as the fundamental voltage-current relation for our arc may be premature. However, the basic idea of a negative resistance (dynamic or otherwise) is fundamental to this system. These concepts are by no means new, and the literature contains much on this topic. For example, Van der Pol's equation, which governs the current flow in a triode, is:

L d²I/dt² - (a - 3bI²) dI/dt + I/C = 0    (5)

where a and b are constants of the triode characteristic.

Here the second term can be positive, zero, or negative, leading to damped, undamped, or building oscillatory solutions. We can calculate the negative or free energy directly from (5). The second, or damping, term is:

-(a - 3bI²) dI/dt

If we integrate this term with respect to time, we get the voltage:

V = bI³ - aI

This is the voltage across the circuit element which gives rise to the damping term. The energy delivered into this element in a time t is:

Ein = ∫VI dt = ∫(bI⁴ - aI²) dt

Clearly, if I > 0 is small, Ein < 0; the energy is negative, showing that the circuit element is putting out electrical energy - free energy. This calculation is an example of non-dynamic negative resistance, and it only shows why it is so important to discern the correct voltage-current relation for our arc. The measure of the free energy produced depends on a knowledge of this relation. More research is needed to establish the correct empirical relation between voltage and current for our arc.


Electric arc

Advances with Viktor Schauberger's implosion system

Borge Frokjaer-Jensen
Danish Institute of Ecological Techniques
Ellebuen 21
2950 VEDBAEK
Denmark

The following continues the presentation given at the 1st International Symposium on Non-Conventional Energy Technology, held in Toronto in 1981. At that time I emphasized the implosion theory and some advances made by the Austrian scientist Viktor Schauberger. Now I am focusing on one of his designs - a water cleansing device - which has since been successfully developed.

When Viktor Schauberger died in 1958, at the age of 73, it was after a bitter fight against his contemporary scientific community. His ideas about the structure and function of nature were seen as controversial and difficult to comprehend. He advanced new ideas and theories about the essence of water, about plants, about agriculture and forestry and about the energy supply. Yet his ideas did not die with him; interest in his theories is spreading.

Literature about Schauberger's inventions and theories is now available. In my opinion, the best guide is Bertil Gustavsson's book "Connections between the Implosion Theory and the Classic Sciences". Unfortunately, this book is in Swedish only. Another book, in German, is "Die Geniale Bewegungskraft" (The Ingenious Power of Movement), by Schauberger's friend and collaborator Aloys Kokaly, who is also editor of "Kosmische Evolution". This periodical gives firsthand information about natural biological processes observed by Schauberger. Still another recommended book is written by his friend Olof Alexandersson: "Living Water", Turnstone Press, 1982. Finally, the German periodical "Mensch und Technik - Naturgemass" has a special issue on Viktor and Walter Schauberger (issue H4, 1982, written by Harthun, Fischer, Neumann and Wieseke).

Schauberger noted that everything in nature has bipolarity: light and darkness; warm and cold; male and female; pressure and draw, etc. Between these opposites there is a constant flow of elevating qualities running from the lower to the higher in a logarithmically spiralling movement. There are two spiralling movements: one outward-going, sucking energy away from the centre in an eccentric centrifugence, and one inward-going, which presses energy in a concentric centripetence towards the condensed centre.


Movements


Natural modes of motions

The core philosophy of Schauberger could be expressed as follows: The force is life, and the secret of life is bipolarity. Without opposite poles in nature, there is no attraction and repulsion, there is no movement. Without movement, there is no life.

Each movement has its characteristic qualities: The outgoing movement is followed by an increased friction and increased pressure with a rise of temperature, biological breakdown and decomposition. It is characterized by the processes of explosion, expansion and gravitation. The in-going movement is followed by a decreased friction, decreased pressure combined with decreased temperature and biological improvement. The corresponding processes are implosion, impansion and levitation.

Even when the two movements are naturally balanced as two opposing forces of equal strength, the inwards-going concentric spiral dominates in its reconstructive role.

The spiral movement may turn either counter-clockwise or clockwise. When the motion spirals counterclockwise, it has a structuring and formative influence on material flowing (fluid or gas) in the implosion spiral. If it turns the opposite way around, its function is decomposing and disintegrative. The greatest vitalization is obtained when both spirals move inside each other. Such resultant or double concentric spiral movement is called the implosion spiral and constitutes the essence of Schauberger's implosion theory.

Schauberger used this theory to explain certain natural phenomena such as typhoons, whirlpools, certain shell forms, the meandering of waters, and new forms of energy generators (including his levitating implosion disk).

The water cleansing device

The water-cleansing device is based on the implosion process. Figure 2 shows the concept presented at the 1981 Toronto symposium. Water runs from the left into an egg-shaped copper bowl through small nozzles in a spiralling copper tube. The nozzles rotate the water jets, and the spiral-twisted copper tube causes the whole body of water to start a spiralling whirl, which spins in the bowl and down through the outlet tube. Experiments showed that impurities such as iron filings were pressed together into small balls in the centre of the tube by the impansion force. Figure 3 shows the copper spiral tube used in the prototypes. Unfortunately, this type of spiral tube could not give sufficient water velocity in the vortex.


Figure 3.

Figure 4 shows a similar product which is made of copper. This is dangerous if such an appliance is operating continuously. Impurities in water may combine with copper and develop toxic byproducts.


Figure 4.

Figure 5 shows the current water cleansing device, now in production. One of the participants in our research group is Irma Hoyrup. She visualized a golden pear-shaped water-cleansing device, viewed externally as well as internally. In our device, water is sprayed into a golden pear-shaped bowl and forms a counter-clockwise spiralling rotation - the implosion movement - around an electrode in a conical golden tube. This water undergoes the changes pointed out by Schauberger: namely a reduced temperature and an increased vitality. The treated water is still spiralling when it leaves the device, as can be shown by photography at shutter speeds of 1/1000 of a second.


Figure 5.

The mechanical energy for the vitalizing process is supplied by the water tap pressure. The normal working range is a pressure of 3-7 bar, which corresponds to approximately 15-40 litres of water per minute, an advantage over other water-cleansing devices on the market. In the centre of the golden bowl there is an electrode made of layers of copper, silver and gold. This electrode is soldered to the bowl in order to give a good electrical connection. The electrical energy for the cleansing function is tapped directly from the whirling water: the faster the implosion whirl, the stronger the electric current through the spiralling water in the implosion centre around the structuring electrode. Measurements carried out on water containing average pollution levels show a 140 mV potential between the negative copper/silver/gold electrode and the positive copper/gold bowl. The electrical potential depends on the rotational speed of the spiral, and the current depends on the degree of pollution, reaching as high as a few milliamperes in averagely polluted water. Bacteria and viruses are effectively killed by this arrangement. Moreover, all the inner surfaces of the device are gold-plated, and therefore chemically inert. The silver electrode's surface is treated to maximize the active area. As the water turbulence is broken up by the granulated silver surface, microscopic air bubbles are formed in the implosion spiral. This mechanical treatment causes the lime in the water to decompose into carbon dioxide, which is released into the air, and calcium carbonate, which crystallizes into aragonite crystals, which do not precipitate as lime.

Finally, the water-cleansing and water-vitalizing function depends on a special interaction between pure materials: gold, silver and copper, which are the constituent parts of the device. Such interaction between the gold and copper on the surface of a golden copper plate apparently was known by alchemists. Irma Hoyrup informed us about this layering process and the technique of how to properly fashion the golden copper bowl.

Test results

This device has no filter, no reverse osmosis membrane, and there are no magnets attached. Water pours out at a rate of up to 50 litres per minute. But does it cleanse the water? In Denmark, traditional chemical and biological analyses are made by Stein's Laboratories, authorized by the Danish government. Figure 7 shows the certificate of an analysis of polluted lake water before and after passing through the device.


Certificate of an analysis.

The main function of the device is to kill bacteria. Colon and faeces bacilli are reduced to about 15% of their originally very high numbers. Several biologists have confirmed that these results are unique, especially since there is no filter or semipermeable membrane in the device. Furthermore, the pH value is increased from 7.3 to 7.5, i.e. towards greater alkalinity.

Vitalised water

Viktor Schauberger stated that water could be "dead" or "alive". Water becomes dead after passing through kilometers of straight iron pipes. Such water has lost its biological effect and does not support living organisms. Living water may be found in wells and in unpolluted rivers with meandering flows. And water spiralled through an implosion whirl is biologically active.

Table 1

Extract of the results of analyses carried out by Stein's Laboratory with the improved Schauberger-type water cleansing device

Reduction of minerals: permanganate, calcium, sodium, bicarbonate, sulphate, nitrate, nitrite and phosphorus (these reductions are minor)

Reduction of bacteriological counts:

Colon bacillus:        542 to 79 per 100 ml
Faeces bacillus:       542 to 79 per 100 ml
Number of germs:       7,200 to 4,500 per ml
Fluorescent germs:     400 to 100 per ml

pH value:              7.3 to 7.5
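The percentage reductions implied by these counts are easily computed; a small sketch using only the figures quoted in Table 1:

```python
# Percentage of the original bacteriological counts remaining after treatment,
# computed from the figures in Table 1.
counts = {
    "Colon bacillus (per 100 ml)":  (542, 79),
    "Faeces bacillus (per 100 ml)": (542, 79),
    "Number of germs (per ml)":     (7200, 4500),
    "Fluorescent germs (per ml)":   (400, 100),
}

for name, (before, after) in counts.items():
    remaining = 100.0 * after / before
    print(f"{name:30s} {before:>6} -> {after:>6}   ({remaining:.0f}% remaining)")
```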

There exist several methods of measuring biological activity. More than 100 Kirlian photographs have been used by us to evaluate water before and after passing through the device. In Figure 8, the photo to the left shows the pretreated tap water.


Pretreated tap water.

The treated water has at least three important differences:

1) The blue belt of light around the treated water is broader than around the untreated water;
2) the sparks penetrating the blue belt of light are more indistinct or fuzzy in the tap water; and
3) the sparks are measured to be approximately 25% longer around the treated water

The little red circle in these photos indicates where the lead from the high-voltage source has touched the drop of water placed on the back of the film. The exposure is made with 10 short single sparks of 15 kilovolts from a piezoelectric crystal. When specialists in Kirlian photography are asked to interpret these results, they concur that the water has improved its biological quality. Such water appears to have been structured like water treated by the method employed by Dr. Marcel Vogel.

The water has also been tested by the Helix Centre for Liquid Flow Research in Odense. This research centre applies non-conventional methods, such as those developed by Dr. Rudolph Steiner and kinesiological muscle tests, which have been used in the following analyses. In Table 2, the left column indicates the ideal rating. Then comes the analysis of tap water, followed by reverse osmosis water and water from our device. Water treated by our device parallels the ideal rating.

Table 2

Comparative analysis of water for vitality, Helix Centre, Odense (June 15, 1988 data)

                                          Ideal     Tap       R O       Device

COLD TAP WATER
  Parathyroid                             +         +         +         +
  Overstress                              +         /         +         +
  Allergy                                 /         +         /         /
  Thymus                                  100/100   /         94/100    100/100

BOILED WATER FROM COLD TAP WATER SOURCE
  Parathyroid                             +         /         +         +
  Overstress                              +         /         +         +
  Allergy                                 /         +         /         /
  Thymus                                  100/100   /         92/100    100/100

HOT TAP WATER
  Parathyroid                             +         +         n/a       +
  Overstress                              +         /         n/a       /
  Allergy                                 /         +         n/a       /
  Thymus                                  100/100   /         n/a       10/100

The device appears to perform better than a reverse osmosis unit, which is its best commercial competitor. An analysis of hot tap water indicates that such water is so spoiled that it is impossible to regenerate it, even with our device.

Sprouting tests

A good way of testing biological vitality of water is to grow seeds in it. Different seed varieties have been used and the overall observation is that the sprouts from the seeds grown in treated water are somewhat stronger and thicker than sprouts grown in tap water. In Figure 9, the left photo shows beans grown five days in tap water. Notice the mould growths. Smell of putrefaction was pronounced. Beans grown in treated water have sprouts that are stronger and longer. There is no sign of mould and no bad smell.


Beans grown.

In treated water the seeds consume about 20% less water than when grown in tap water. These results are measured after 20 hours, at statistically significant levels (99.9%). It is possible that the seeds require less of the biologically active water to grow.

In a "Future Technology" gathering in Berlin, Dr. Hacheney from Dortmund, while presenting a paper on Kirlian photography of water, said: "We are living in a community where it is necessary to have a centralized supply of water, even though we know that the quality of tap water is not good for drinking. Therefore, we are forced to develop a device for water improvement which can be installed in the water supply at the consumers' end". Preliminary tests show that we really have got something special. Furthermore, it gives an excellent reduction of microorganisms.

Plastic engine technology

Covert Harris
PETCO
KINGSTON, Ontario
Canada

This paper will take a look at a short history of PETCO (Plastic Energy Technology Corporation), a leading firm in the development of plastic engines, as well as a quick look at the many advantages of portable plastic engines over the conventional two-cycle engines presently used in applications of 5 H.P. and below.

PETCO was founded in July 1985 by its first president, Gerry McKendry, along with senior vice-president and director of engineering, Lee Lilley. Lilley, the technical mastermind of PETCO, used to design and drive racing cars, and thus has a natural understanding of and appreciation for the importance of producing light parts, since lighter means faster - the bottom line in car racing. Because of this, he first started experimenting with injection-moulded thermoplastics, and was able to move up to seventh place overall on the Formula I Grand Prix circuit. Lilley later designed an engine that runs 200 degrees cooler than conventional two-cycle engines. With improving thermodynamics, however, the temperature of the engine may become less of a factor, even though plastic engines run cooler than conventional ones.

When PETCO first came into existence, and after building its first successful prototype in May of 1986, they established a plant in Kingston. The facilities were quite small, only 33,000 square feet, with an ability to produce 250,000 units annually. Even with the acquisition of a plastics company, which greatly improved self-sufficiency, PETCO was limited in its potential due to the small size of the plant. However, in the fall of 1988, a new 157,000 square ft. plant was opened with a capability for producing 2 million units per year.

Figure 1 shows the basic design of a small working plastic engine. The connecting rod, crankcase and gearcases - everything but the cylinder, piston and crankshaft - are plastic. Since this design, the crankshaft has been made of plastic as well. The standard two-cycle engine operates in two strokes: the piston goes up and compresses the gases, the gases are burned, the piston descends, forcing out the burnt gas, and goes up again bringing in fresh gas, and so on. Most of these engines bring in the gas on one side of the cylinder, about midway, and expel the gas on the other side, again about midway. This makes it inevitable that fresh and burnt gases will mix to some degree, so it is impossible to keep the charge completely pure. Lilley instead decided to route the incoming charge into the crankcase below, timed and synchronized by a rotary poppet valve, and then into the crankshaft through a series of holes, so that the air from outside actually cools the crankshaft down from 300 degrees.


Plastic engine.

The holes help atomize the gas even further, so the gas is then even easier to burn than in a conventional engine. From the poppet valve in the head, fuel, as well as incoming air, is sprayed directly at the bottom of the sparkplug, generally the hottest part of the engine. That squirting of air actually cools the engine on top, without negatively affecting its operating efficiency. As the cool gases drift towards the bottom, they push the spent gases out through the end holes. This expels almost all of the burnt gas before it can contaminate the fresh gas.

Efficient, cleaner

Due to this greater purity, the gas can also burn more efficiently. Instead of burning at about 50% efficiency, as in conventional internal combustion engines, the fuel burns at 95% efficiency. This, in turn, creates carbon dioxide instead of carbon monoxide, so the engine is not only environmentally cleaner, but utilizes the fuel more thoroughly.

Further advantages of plastic include the ability to hold tighter tolerances than with die-cast metal parts, since the plastic does not have to be machined or ground - processes which are not as consistent as the injection moulding used for the plastic engine parts. At this point, we can make engines up to 70% plastic, and we are aiming for a much higher percentage in the future. This allows manufacturers of products that use light engines, such as lawnmowers and chainsaws, to avoid having to design around the engine, since the plastic is more reliable.

Heat also doesn't take the toll on plastic engines that it does on the standard internal combustion two-cycle engines. The plastic doesn't crack as easily, and it won't suffer from structural fatigue. Noise is also better contained within the plastic, unlike metal, which tends to transmit and even amplify it.

On the question of lifetime, our first prototype at PETCO lasted for more than 6,000 hours before we dismantled it. And even then, it was not dismantled due to deterioration, but because of obsolescence. The standard two-cycle engine in the 2 to 5 H.P. range will rarely last for longer than 300 hours.

There are generally three different plastics used by PETCO in building our engines, normally supplied to us by I.C.I. (International Chemicals, Inc) as well as by our wholly-owned subsidiary, Paragon Plastics (1983) Limited. The main one of these is known as Feutron, a polyetherimide thermoplastic resin reinforced with glass fiber; it is actually 50% glass, in 10 mm glass fibers, which gives this plastic great strength and excellent resistance to vibration, as well as heat resistance to 257 degrees Celsius. It also offers higher resistance to fuels and fuel mixtures.

So the definite advantages of plastic engines and the practicality of their applications make this a wide-open field for advancement in the decades to come. At present, usage has been restricted to portable motors of rarely more than 20 H.P., and for the most part no more than 5 H.P. However, as plastic engines become even more energy- and cost-efficient, larger applications will undoubtedly be explored.

A novel D.C. motor

Gareth Jones
Talcen Eiddew, Carreglefn
AMLWCH, LL68 OPW
United Kingdom

This paper describes the design of a novel electric motor with unique performance characteristics. The motor was designed originally as a scientific experiment to verify certain conclusions following studies into the behaviour and properties of magnetic fields, and the approach to electric motor design is therefore, to say the least, unorthodox. However, it is believed that one would not have arrived at this very successful design had the designer adhered rigidly to the tenets of electromagnetic theory.

Indeed, such is the present state of knowledge and understanding of magnetic fields that all too often an electromagnetic machine, designed in accordance with accepted theory, fails to live up to expectations in practice. This is such a common occurrence that electric motor manufacturers show a marked reluctance to develop radically new machines, preferring instead to improve upon tried and tested designs, some of which are over a century old.

This paper also describes how existing doubly excited machines such as alternators and synchronous motors can be converted easily and cheaply to DC motors with a performance at least comparable with their conventional counterparts. Incredibly, therefore, electric machine manufacturers have been making alternators and DC motors for over a century not knowing that one machine, the alternator - the cheapest and easiest to make - could fulfill both roles. However, it should also be borne in mind that a new theory does not make a new design possible: it has always been possible to build the motor described in this paper, and it was only theoretical predictions that prevented it from happening sooner.

Overview of the Electric Motor

Electric motors are energy converters, converting electrical energy to mechanical energy in two stages: first, electrical energy is converted into magnetic energy, and secondly the magnetic energy is converted into mechanical energy. There are numerous ways in which these conversions can be achieved, and consequently there are numerous designs of electric motor.

All known electric motors have one property in common, which is that mechanical energy is produced from the interaction between two magnetic fields, a primary field and a secondary field. It is not always obvious that two magnetic fields are involved but closer examination will invariably prove this to be the case. Cases of doubt such as reluctance motors, for example, can often be resolved by considering a coil wound around a component and establishing whether or not an e.m.f. will be induced in the coil.

The primary or main field is produced from a winding connected either directly to the supply or through switches such as brushes or solid state devices. The secondary field can be produced in three different ways:

a) by induction from the main magnetic field;
b) by having a second winding connected to the supply;
c) from a permanent magnet.

Electric motors are often categorised by the method used to produce the fields: motors in which the secondary field is induced from the primary field, for example, are known as induction motors, and those that produce a second magnetic field from a secondary winding or a permanent magnet are called doubly excited motors.

To design an electric motor from first principles we need to consider three activities:

1) producing the main or primary magnetic field,
2) producing the secondary magnetic field,
3) producing a mechanical torque from the interaction between the two fields.

The Primary Field

To provide a continuous rotating torque, the current in the main field winding of every multi-pole electric motor must be interrupted or reversed at the end of each half cycle. With a.c. supplies this is achieved automatically, but in DC machines the current has to be switched, or commutated, and this can be accomplished either by mechanical switches such as brushes and a commutator, or by electronic switches such as thyristors and transistors.

At first glance, therefore, a primary field connected to an AC supply would appear to be the "best" main field winding, as this obviates the need for switching devices, and for most domestic and industrial applications this is true. Many applications, however, require a variable speed over a wide range; since the frequency of the supply is fixed, this is not easily achieved with AC motors, which have to run at or near synchronous speed. In general, AC motors are constant-speed, low-torque machines.

To produce a variable speed drive with full torque throughout the speed range, the main field winding of an electric motor has to be switched when the two fields are in a predetermined position relative to one another, and if we take an extreme case where a high continuous torque is required under stall conditions, then clearly a DC motor offers the best solution.

From the performance viewpoint, conventional doubly excited brushed DC motors are probably the "best" all-round motors, since the speed can be varied over a wide range or, if the application requires, held constant within fine limits over the full power range by providing incremental control of the field current. By connecting the field windings in series, in parallel, or in a mixture of the two, called a compound connection, a very wide range of operating characteristics can be obtained; of all the hitherto known motor designs, this one certainly offers the greatest variety of performance characteristics as well as the highest torque from a given volume.

With so many advantages in its favour, and given the ease and low cost of converting AC supplies to DC, one would expect conventional DC motors to be the most popular drive for both domestic and industrial applications. In fact, this is not the case, because conventional DC motors suffer from very serious design limitations which make them expensive to manufacture compared with an AC induction motor; moreover, the brushes and commutator need regular maintenance, which adds to operating costs and decreases reliability.

The Specification

From the performance viewpoint the conventional brushed DC motor is the motor to emulate, and the specification for this design is a motor that:

a) will perform as well if not better than a conventional brushed DC motor,
b) is cheaper to manufacture than conventional DC motors,
c) is relatively maintenance-free.

To make the machine cheaper to manufacture and reduce maintenance, it is desirable to have the main field on the stator for ease of access to the power supply, otherwise, with the main field on the rotor, brushes will be needed to carry the full load power, and, as well as taking up space, these components require regular maintenance.

To provide continuous rotation and to avoid large variations in torque, it is desirable to have a rotating magnetic field on the stator, which is achieved by switching the stator windings in sequence as in AC motors, and the most efficient way of achieving this effect is with three-phase windings. Three-phase windings are standard industrial windings, and can also be easily wound on automatic winding machines, which further reduces production costs.

For a motor operating on a DC supply, the secondary field cannot be induced. Besides, having a separate secondary field allows incremental changes in performance, which is a useful characteristic in many applications, and therefore a separately excited secondary field can be desirable, although brushes and slip-rings would be required. Alternatively, the secondary field can be produced by permanent magnets fixed to the rotor. In this design, therefore, the secondary field can be produced by a wound field or by permanent magnets, to suit the application.

Construction of the Jones Motor

The basic construction of the motor is now described and a typical example is illustrated in Figure 1.


Jones motor.

The stator is fabricated from steel laminations having semi-enclosed slots to accommodate windings, and is wound with three phase windings connected in a wye configuration. The manufacture and cost of the stator is therefore the same as that of a three phase AC motor.

The rotor magnetic field can be produced from either a winding connected to the supply through brushes and slip-rings as in rotating field alternators, or from permanent magnets attached to the rotor as shown in Figure 1.

Hitherto, it has not been possible to commutate large windings such as are found on AC motors because of the very high induced e.m.f.s involved. When the current passing through a coil is interrupted, a high voltage is induced in the coil, a phenomenon put to good use in boiler igniters and automobile ignition systems; but in electric motors the resulting spark often leads to catastrophic failure, by flashover in brushed DC motors and by rupture in solid state devices.

Commutation is effected in brushed DC motors by short-circuiting the segments to which the coil is connected, and this produces the desired rapid decrease in current and flux in the coil. Unfortunately, in the case of DC motors, this rapid change in magnetic field produces a very undesirable large e.m.f. in the coil proportional to the rate of change of current, and if commutation is not completed by the time the short circuit is removed, then sparking will occur with consequential damage to the brushes and commutator.

It has been established that sparkless commutation cannot be achieved unless the inductive e.m.f. is limited to about 10 volts per coil and the mean voltage between commutator segments does not exceed about 20 volts. Clearly, these factors impose a severe limitation on the design of conventional DC motors, because even without a factor of safety a 240 V DC motor would require a minimum of 24 segments on the commutator, and since each segment has to carry the full load current, the magnitude of the problem can be appreciated.

These design restrictions are largely overcome in conventional DC motors by producing the main field from several small coils connected to several commutator segments, rather than a few large coils, and in that way, the induced e.m.f. per coil is reduced to a safe value. This solution, however, determines the characteristic rotating armature design of conventional DC motors having a large number of coils connected to several commutator segments, because it would be impractical to have a large number of coils on the stator, particularly for large motors.

The question, therefore, arises as to how commutation is going to be achieved in the present design.

How it Works

To answer this question the operating principle of the Jones motor will be described in simple terms with reference to Figure 2 which shows a simple rotating field two pole motor. With the rotor poles near alignment with the stator poles, the stator winding can be energized in one of two directions, either to produce stator poles which attract the rotor poles as shown in Figure 2a, or to produce stator poles which repel the rotor poles as shown in Figure 2b. The question is, "which method will produce the best motor?"


Rotating field.

From conventional concepts, the "best" method would be to energize the stator poles to attract the rotor poles, and this conclusion can be established by considering the energy stored in two coils La and Lb having mutual inductance M. The energy stored is:

E = ½LaIa² + ½LbIb² ± MIaIb    (1)

where M = k√(LaLb) and k = coupling coefficient.

When the coils are energized to attract, the energy (Ea) stored in the circuit is:

Ea = ½LaIa² + ½LbIb² + MIaIb    (2)

and the mutual inductance M is a maximum when the poles are in alignment, and consequently, at the end of a cycle, when the force of attraction is a maximum and we need to commutate the windings, the stored energy in the windings is also a maximum. When the coils are energized to attract, therefore, the greater the force of attraction, the greater the energy to be dissipated.

From theoretical considerations this is a logical conclusion. It appears that, since the electrical energy is a maximum when the coils are attracting, the mechanical energy is also a maximum, and therefore the greater the electrical energy converted, the greater the mechanical energy produced; the "best" method must therefore be to energize the stator poles to attract the rotor poles, as shown in Figure 2a, and all classical electric motors operate on this principle.

When the coils are energized to repel, the energy stored in the circuit (Er) is:

Er = ½LaIa² + ½LbIb² - MIaIb    (3)

which is less than the energy stored when the coils are attracting (Ea) by an amount 2MIaIb.

Let us now consider the mechanical forces on the rotor. In a practical machine, the stator windings are inserted in slots around the inside diameter of the stator, and when a current-carrying conductor is placed in a magnetic field, it experiences a force in accordance with the fundamental equation:

F = BIl    (4)

From this equation and indeed from the S.I. definition of the ampere, we can deduce that the magnitude of the force acting on a current-carrying conductor in a magnetic field is the same irrespective of the direction of the current.

We therefore have an anomaly: as we have seen from the above equation and from the definition of the ampere, the magnitude of the force between two coils is the same whether the coils are attracting or repelling, yet the amount of energy required to produce a force of attraction, from equation (2), is much greater than that required to produce a force of repulsion, from equation (3). Since the function of an electric motor is to produce a torque from the force between two magnetic fields, the "best" method now appears to be a force of repulsion, as shown in Figure 2b, rather than a force of attraction, as shown in Figure 2a.
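A short numerical illustration of equation (4); the flux density, conductor length and current below are arbitrary illustrative values, chosen only to show that reversing the current changes the direction of the force but not its magnitude:

```python
# Force on a current-carrying conductor in a magnetic field, equation (4): F = B*I*l.
# B, l and I below are arbitrary illustrative values.
B = 1.2    # flux density, tesla
l = 0.10   # length of conductor lying in the field, metres

for I in (+10.0, -10.0):   # the same current in opposite directions
    F = B * I * l
    print(f"I = {I:+6.1f} A  ->  F = {F:+6.2f} N   (|F| = {abs(F):.2f} N)")
```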

If the coils are identical and connected in series, so that the current in both coils has the same magnitude I, and writing L for the inductance of each coil, then it can be shown that, when the coefficient of coupling is unity, the energy required to produce a force of attraction, from equation (2), simplifies to:

Ea = 2LI²    (5)

and the energy required to produce a force of repulsion, from equation (3), simplifies to:

Er = 0    (6)

Another advantage of using repelling fields is that the mutual inductance is a maximum when the force is a maximum, but this occurs at the beginning of the cycle and not at the end. Also, with repelling fields and assuming unity coupling coefficient, the stored energy is zero, and no matter what magnitude of force we produce by increasing the current, the stored energy will remain zero.
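The energy comparison of equations (2), (3), (5) and (6) can be checked numerically; a minimal sketch with arbitrary illustrative coil values:

```python
# Stored energy in two magnetically coupled coils energized to attract or to repel,
# per equations (2) and (3): E = 0.5*La*Ia**2 + 0.5*Lb*Ib**2 +/- M*Ia*Ib, M = k*sqrt(La*Lb).
# The inductance and current values are arbitrary and purely illustrative.
import math

def stored_energy(La, Lb, Ia, Ib, k, attract=True):
    M = k * math.sqrt(La * Lb)
    sign = 1.0 if attract else -1.0
    return 0.5 * La * Ia**2 + 0.5 * Lb * Ib**2 + sign * M * Ia * Ib

# Identical coils in series (same current) with unity coupling: equations (5) and (6).
L, I = 0.02, 5.0   # henries, amperes (illustrative)
Ea = stored_energy(L, L, I, I, k=1.0, attract=True)
Er = stored_energy(L, L, I, I, k=1.0, attract=False)
print(f"Attracting: Ea = {Ea:.3f} J (= 2*L*I**2 = {2*L*I**2:.3f} J)")
print(f"Repelling:  Er = {Er:.3f} J")
```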

So much for producing a force, but we still have a commutation problem. Or do we? Let us now consider the induced e.m.f.'s, and to simplify the description the magnetic flux produced as a result of current in the stator coil will henceforward be referred to as the stator flux and similarly, the flux produced as a result of current in the rotor coil or from a permanent magnet, will be referred to as the rotor flux.

With the stator coil de-energized, and the rotor flux threading the stator, the flux in the stator will be referred to as rotor flux, as it originates in the rotor.

Consider the stator winding de-energized and the rotor poles in alignment with the stator poles as shown in Figure 2c. The stator provides a low reluctance path for the rotor flux, and therefore most of the rotor flux locates in the stator.


Figure 2c+2d

When the stator winding is energized to repel the rotor, the stator flux makes the stator a high reluctance path for the rotor flux, which is expelled from the stator as shown diagrammatically in Figure 2d. However, the two fields exert a force on one another in the airgap, and an increase in stator flux is accompanied by a reduction in the rotor flux linking the stator; similarly, a decrease in stator flux causes an increase in the rotor flux linking the stator winding.

This is an important concept in this invention and indeed to electromagnetic theory, since it is implied that magnetic fields do not expand to infinity as is generally believed, and also that magnetic fields are compressible. These were some of the concepts that the motor was originally designed to demonstrate.

To understand the principles involved, consider the rotor locked in the alignment position and the stator winding energized to, say, 50% current, then an increase in stator current and consequently stator flux is accompanied by an equal reduction in rotor flux linking the stator, and similarly, a decrease in stator current is accompanied by an increase in rotor flux linking the stator, because of the reduction in stator flux.

Contending fields and fluxes in the airgap

In practice, the boundary between the fields is in the airgap, and an increase in one field will compress the other, and similarly a decrease in one field will cause the other to expand. These simple concepts are, of course, contrary to electromagnetic theory, and an anathema to modern physicists, but the facts are that each phenomenon described can be proved quite easily by monitoring the e.m.f.'s induced in the windings. However, further discussion of the theory involved is beyond the scope of this paper.

Let us now consider the e.m.f. induced in the stator winding, again with the rotor locked in position. When the stator current is increased, producing more stator flux, the e.m.f. induced in the stator winding, in accordance with Lenz's law, opposes the applied e.m.f. and therefore acts in a direction which tends to reduce the current in the stator winding. At the same time, the increase in stator flux causes a decrease in the rotor flux linking the stator and, again from Lenz's law, the e.m.f. induced in the stator winding by the receding rotor flux will be in a direction which opposes the change; i.e. in a direction which causes an increase in current.

Annulling the opposing fluxes

The induced e.m.f.s are therefore in opposite directions, the growing stator flux producing an opposing e.m.f. and the receding rotor flux producing an assisting e.m.f. Since the change in rotor flux linking the stator winding is caused by the change in stator flux, the two e.m.f.s induced in the stator are equal and opposite and sum to zero. This allows the stator winding to be interrupted without causing the high inductive e.m.f.s which have plagued conventional DC motors for well over a century. For all intents and purposes, therefore, the stator winding of a Jones Motor presents a resistive load to the supply, and the commutation problems associated with conventional DC motors are obviated.

Figures 3 through 11 illustrate how the principle is applied to a practical machine. Figures 3, 6 and 9 show a cross-section through a two-pole motor and the current distribution around the stator with the rotor shown in different positions. In practice, the motor would be connected to the DC supply through an electronic inverter, but for descriptive purposes a mechanical inverter is shown in Figures 4, 7 and 10, which is also a practical solution. Indeed, sparkless commutation has been achieved on a 4 kW two-pole machine, with just two commutator segments and 380 volts between the segments. Figures 5, 8 and 11 show the effective connections to the supply for the different rotor positions of Figures 3, 6 and 9, with the switches omitted for clarity.


Figures 3 to 11.

A mechanical commutator has as many segments as there are field poles, and the machine illustrated has two field poles. The commutator therefore has two segments 6a and 6b in Figure 4. The segments are connected to the terminals of the supply through slip-rings and brushes, which are not shown, with segment 6a connected to the positive terminal and segment 6b to the negative terminal.

Description of the current distribution in the Jones Motor

Three brushes 7a, 7b and 7c are symmetrically distributed around the commutator, and connect the segments to the stator windings; brush 7a is connected to winding 2a, brush 7b is connected to winding 2b, and brush 7c is connected to winding 2c, as shown in Figures 4, 7 and 10.

At the instant shown in Figures 3 and 4, the start of winding 2a is connected via brush 7a to the positive segment 6a, the start of winding 2b is also connected to the positive segment 6a via brush 7b, and the start of winding 2c is connected to the negative segment, as shown in Figure 5.

Current distribution around the stator at this instant is shown in Figure 3. The conductors on one half of the stator carry current in one direction, and those on the other half carry current in the opposite direction so that two stator poles are formed having a North pole between slots 2 and 3, and a South pole between slots 8 and 9. The stator North pole exerts a force of repulsion in the clockwise direction on the rotor North pole, and the stator South pole exerts a force of repulsion in the clockwise direction on the rotor South pole.

As the rotor and the commutator segments advance in the clockwise direction, brush 7b disengages from segment 6a and makes contact with negative segment 6b. The start of winding 2b is now connected to negative segment 6b via brush 7b, so that at this instant the start of winding 2a is connected to the positive terminal, and the start of windings 2b and 2c connected to the negative terminal as shown in Figure 8. The current distribution around the stator at this instant is illustrated in Figure 6 and again the conductors on one half of the stator carry current in one direction, and those on the other half carry current in the opposite direction so that again two stator poles are formed, a North pole between slots 4 and 5, and a South pole between slots 10 and 11.

The stator North pole exerts a force of repulsion in the clockwise direction on the rotor North pole, and similarly the stator South pole exerts a force of repulsion in the clockwise direction on the rotor South pole, and again the rotor advances in a clockwise direction until brush 7c disengages from segment 6b and makes contact with segment 6a and the cycle is repeated as illustrated in Figures 9, 10 and 11. In this way, the stator poles produced by the three phase windings advance around the stator "pushing" the rotor as illustrated in Figures 3, 6 and 9.
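For readers who prefer to see the switching pattern written out, the following short sketch (in Python) tabulates the brush polarities described above as the rotor advances. The angle conventions are illustrative assumptions only, chosen so that the printout reproduces the sequence in the text: positive segment 6a is taken to span 180 degrees and the brushes 7a, 7b and 7c to sit 120 degrees apart.

# A minimal sketch of the two-segment, three-brush commutation sequence
# described above. Assumptions for illustration only: positive segment 6a
# spans 180 degrees, the brushes sit 120 degrees apart, and the angle
# reference is chosen so that the printout matches the text.

BRUSHES = {"2a (brush 7a)": 0, "2b (brush 7b)": 120, "2c (brush 7c)": 240}

def polarity(brush_deg, rotor_deg):
    """'+' if the brush rests on positive segment 6a, '-' if on segment 6b."""
    return "+" if (brush_deg + rotor_deg) % 360 < 180 else "-"

for rotor_deg in range(0, 360, 60):
    states = ", ".join(f"{w}:{polarity(b, rotor_deg)}" for w, b in BRUSHES.items())
    print(f"rotor at {rotor_deg:3d} deg -> {states}")

# First line: 2a and 2b positive, 2c negative (Figures 3 to 5); next line:
# brush 7b has moved to the negative segment (Figures 6 to 8); and so on
# around the six-step cycle.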

From Figs.3, 6 and 9, it can also be seen that all the stator conductors are energized at any given instant just as in a conventional brushed DC motor, and again like a conventional DC motor, all the conductors are in the secondary magnetic field, and all contribute to producing a torque on the rotor. The torque per amp conductor produced by a motor operating on the Jones principle is, therefore, the same as that produced in conventional DC motors.

The mechanical commutator can be replaced with an electronic commutator or inverter by switching a three-phase inverter in synchronism, and in phase, with the rotor position; a diagram of the switching conditions is shown in Figure 12:


Diagram

Advantages of the Jones Motor - brushless, more power, smaller

The torque per amp conductor produced by the motor is therefore the same as that of conventional DC motors and, in an electronically commutated version, the Jones Motor has the further advantage that it requires neither brushes nor commutator, so it produces the same power from a smaller volume. The design allows the main winding, or armature, to be the stator component, which permits a larger conductor diameter and lower winding resistance, and offers a greater surface area for heat dissipation. All of these factors reduce the size of motor required for a given power, and a motor operating on the Jones principle is therefore substantially smaller than a DC motor of the same rating.

The same advantage holds over conventional brushless DC motors, in which only two of the three phases are energized at any given instant, so that the power produced is appreciably less than from a Jones Motor of the same volume. Moreover, as was shown above, when the magnetic fields are attracting, not all of the input energy is utilised to produce useful mechanical output, although the construction is similar and such motors can be operated on the Jones principle. Also, since their back e.m.f. waveform is trapezoidal, conventional brushless DC motors produce a high ripple torque, which is a severe disadvantage in applications otherwise suited to this machine. The ripple torque in a well-designed motor operating on the Jones principle, like that of its a.c. counterpart, has been found to be very low and negligible for most applications.

Performance Characteristics

Apart from presenting a resistive load to the supply, producing torque from the reaction between two opposing magnetic fields has another unique effect. Consider an increase in field current and, consequently, in rotor flux: the stator current falls because of the increased back e.m.f., yet the force exerted between the fields is greater because of the additional rotor flux. The result is a decrease in stator current together with an additional force, which will either increase the speed if the torque remains constant, or increase the torque if the speed is held constant. The motor therefore has the unique characteristic that both speed and torque are proportional to the field current.

This unique property of a motor operating on the Jones principle can also be established from equation (6), which shows that the energy stored in the circuit remains constant irrespective of changes in current. In a conventional motor, by contrast, a change in current is accompanied by a change in stored energy in accordance with equation (5), so that only a portion of the input energy is converted to useful mechanical energy; the remainder is stored as magnetic energy, which has to be dissipated at the end of each cycle.

The result of this phenomenon is that when the field current is increased, the input power decreases because of the increased back e.m.f., but the output power increases because of the additional force produced between the two fields, and therefore, the efficiency of a Jones Motor is proportional to the field current.

This characteristic, as far as it is known, is unique to motors operating on this principle, known as the Jones principle, and it was to demonstrate this phenomenon that the motor was originally designed. The efficiency of a motor can be increased by increasing the field current, and this phenomenon can be demonstrated until the back e.m.f. exceeds the applied e.m.f. over a portion of the cycle, and the machine acts as a generator. When this occurs, the speed decreases, reducing the back e.m.f., and the machine reverts to motor action again. This cycle occurs in a very short time interval, and to all intents and purposes, the machine speed is constant.

Designing a Jones Motor

The design and construction of a Jones motor is similar to that of a rotating field three phase alternator, and indeed conventional rotating field alternators can easily be adapted to operate as DC motors operating on the Jones principle.

Typical performance characteristics are shown in the graphs of Figure 13, which are the curves obtained with a 12-volt car alternator; car alternators are themselves three-phase rotating-field machines. The design of three-phase alternators is well known, and it is therefore useful to describe the performance of three-phase alternators after conversion to operate as Jones motors, so that direct comparisons can be made on cost and performance.


Graphs

Adapting Three Phase Alternators and cancelling inductive e.m.f.'s

Three phase rotating field alternators are converted to DC motors operating on the Jones principle by simply adding inexpensive, low resolution rotor position sensors and driving the motor through a three phase DC/AC inverter. Three rotor position sensors are required, one for each phase, and the signal from each sensor is connected to the inverter which, in turn, connects the phase winding to either the positive or negative DC supply terminal depending on the signal from the rotor position sensor.

The rotor position sensors are arranged such that they can be adjusted in relation to the stator, and it is clear from the above that correct phasing is essential to ensure the inductive e.m.f.'s cancel one another. An approximate setting may be established as follows:

1) Energize the field winding.

2) Inject DC current into one phase, making the phase terminal negative, and provide sufficient current to cause the rotor to move into alignment with the stator poles produced by the phase winding.

3) The rotor position sensor is adjusted so that switching occurs at this pole alignment point and the position of the rotor sensor, as well as the phase and sensor, are marked.

4) The sensor is then connected to the inverter, such that the output of the inverter switches from negative to positive when the rotor is caused to move through this position in the required direction of rotation.

5) Repeat for the other two phases.

6) Connect an ammeter in series with the supply to the stator windings.

The phase sequence of the inverter as determined by the rotor position sensors will now be correct, but the switching position may not be the optimum, and high inductive e.m.f.'s can still be produced when switching. It is therefore recommended that the motor initially be energized from a low-voltage supply, or through resistors to limit the current, and that, with the motor running, the rotor position sensors be adjusted to give a minimum current reading on the ammeter.

When adjusting the motor at this stage, an incorrect setting can produce a weak field effect reminiscent of series connected DC motors, which will cause both the stator current and the speed to increase. This is a condition which should be avoided by ensuring that the sensor is adjusted to the location which gives minimum stator current.

Once this setting has been found, the current can be increased by removing the temporary resistors, or connecting the motor to the normal supply voltage, and loading to full load torque. Again the rotor position sensor is adjusted to give a minimum current. The dip in current is quite pronounced, and is similar to adjusting the field of a synchronous motor from leading to lagging power factor. Once this location has been found, the sensors can be locked in this position as no further adjustment is required.
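The adjustment procedure above amounts to minimising the ammeter reading against the sensor offset. The sketch below is illustrative only; read_stator_current() is a hypothetical stand-in for the physical measurement taken with the motor running from a current-limited supply, and its shape and numbers have no connection with any particular machine.

# Illustrative sketch of the phasing adjustment: scan the sensor offset over
# one commutation interval and keep the setting that minimises the ammeter
# reading. read_stator_current() is a hypothetical stand-in for the real
# measurement; its shape and numbers are invented for the example.

import numpy as np

def read_stator_current(offset_deg):
    # Broad minimum around an (arbitrary) correct phasing of 30 degrees,
    # mimicking the pronounced current dip described in the text.
    return 2.0 + 1.5 * (1.0 - np.cos(np.radians(offset_deg - 30.0)))

offsets  = np.arange(0.0, 60.0, 2.0)                    # degrees, one interval
currents = np.array([read_stator_current(o) for o in offsets])
best     = offsets[int(np.argmin(currents))]
print(f"lock the sensors near {best:.0f} deg offset "
      f"(stator current about {currents.min():.2f} A)")
# If both current and speed rise while adjusting, the setting has drifted
# into the weak-field region mentioned above; move back towards minimum current.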

Performance Calculations

Whether we start with a purpose designed machine or with an alternator, the finished machine will have three parameters which will give a good approximation of its performance as a DC motor:

1. The back e.m.f. constant, Ke, measured in volts (r.m.s.) per rad/sec (at a given field current in the case of a wound field machine).

2. The resistance of the stator winding.

3. The iron and mechanical losses, due to friction, windage and, where applicable, brush friction.

Since it is a DC motor, it is convenient to convert the parameters to their DC equivalent, and we begin with the back e.m.f. The back e.m.f. is measured by driving the machine with another motor and measuring the open-circuit terminal volts (r.m.s. line-to-line), at the rated field current (wound field machine) and the speed of rotation.

The back e.m.f. constant (Ke) is found from:

Ke(AC) = V(r.m.s. line-to-line) / w    volts (r.m.s.) per rad/sec    (6)

where w is the speed of rotation in rad/sec,

but since we are interested in the back e.m.f. reflected to the DC supply:

Ke(DC) = 1.35 Ke(AC)    volts DC per rad/sec    (7)

Probably the most important parameter for a DC motor is the torque constant Kt, and as in conventional brushed DC motors, the torque constant and back e.m.f. constant are numerically equal:

Kt = Ke(DC)    Nm per amp    (8)

The second parameter is the resistance of the windings, and from Figures 5, 8 and 11 it can be seen that at any given instant, the phases are connected in a series parallel combination so that if Rp is the resistance per phase, then the resistance (R) reflected to the DC supply:

R = 1.5 Rp    ohms    (9)

The third parameter is the iron and mechanical losses of the machine, which are the same as for other comparable machines. The performance can then be found from the classical DC equation:

V = Ke(DC) w + I R    (10)

where V is the DC supply voltage, I the supply current and w the speed in rad/sec.
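As a worked illustration of equations (7) to (10), the following sketch estimates speed, output power and efficiency from the three parameters listed above. The input figures are invented for the example and are not measurements of the Jones Motor.

# A worked sketch of the performance estimate from the three parameters
# listed above, using equations (7) to (10). The numbers below are
# illustrative assumptions, not measurements from the motor described.

def jones_motor_performance(V_dc, I_dc, Ke_ac, R_phase, fixed_losses_w):
    Ke_dc = 1.35 * Ke_ac          # eq. (7): back e.m.f. constant reflected to the DC supply
    Kt    = Ke_dc                 # eq. (8): torque constant, Nm per amp
    R     = 1.5 * R_phase         # eq. (9): series-parallel winding resistance seen by the supply
    # eq. (10), the classical DC relation V = Ke*w + I*R, solved for speed:
    omega        = (V_dc - I_dc * R) / Ke_dc           # rad/s
    torque_gross = Kt * I_dc                           # Nm
    p_in  = V_dc * I_dc
    p_out = torque_gross * omega - fixed_losses_w      # subtract iron, friction and windage losses
    return omega, p_out, p_out / p_in

w, p, eff = jones_motor_performance(V_dc=48.0, I_dc=20.0, Ke_ac=0.10,
                                    R_phase=0.05, fixed_losses_w=60.0)
print(f"speed {w:6.1f} rad/s, output {p:6.0f} W, efficiency {eff:.0%}")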

Permanent magnet motors - A brief overview

D.A. Kelly
Technidyne Associates
P.O. Box 11422
CLEARWATER, Florida 34616
United States of America

The quest to develop more efficient means of producing energy has mushroomed in recent years. Continued industrial expansion worldwide, the deleterious environmental impact of current energy systems and diminishing resources are all pressing the case for clean energy solutions. The ideal solution would be a low-power, or even no-power-input, non-polluting device drawing on an unlimited resource, which could, once mass-produced and deployed, keep pace with the growing technology in other fields.

One avenue being explored, so far without any real tangible success, is the Permanent Magnet Motor (PMM).

The PMM is, in a sense, a basically simple notion: magnets are placed in such positions that their mutual repulsion creates a spinning motion. What remains to be solved, however, is how to make this arrangement deliver enough power to be a viable source of energy.

The first documented PMM was built in 1269 by Peter Peregrinus. However, only in the mid 20th century have real gains been made in this research field.

A PMM built in the late 1940's used a 4-wheel/quad design. Each 18" diameter magnetic wheel weighed about 150 lbs. A picture of this device is shown in Figure 1; the photograph was published in "Transverse Paraphysics" by J. Gallimore in 1982, but his book gave no accompanying text explanation.

In 1954, Lee Bowman of California built a small-scale model PMM (Figure 2). The device consisted of three parallel shafts supported in bearings within end plates secured to a solid base plate. Three gears were secured at one end of the three shafts, at a two-to-one ratio, with the larger gear on the central shaft. At the opposite end, three discs were secured to the shaft ends, with one larger disc on the central shaft and two equal-size smaller discs on the two outer shafts. The discs were also fixed at a two-to-one ratio, the same as the ratio at the opposite shaft ends. Eight Alnico rod permanent magnets were equally spaced on the one large disc, and four magnets each on the two smaller discs, so that they would coincide in position when the three discs were revolved. The elongated Alnico permanent magnets were placed on each of the discs so that they revolved parallel to the shafts, and their ends passed each other with a close air gap of about 0.005". When the discs were moved by hand, the magnets passing each other were so phased as to be synchronized at each passing position.
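The synchronisation claimed for Bowman's geometry follows from simple arithmetic, as the short sketch below shows: with the outer discs geared to turn twice for each turn of the central disc, eight magnets on the central disc and four on each outer disc present magnets at the passing point at the same angular intervals. The figures are taken from the description above.

# Arithmetic behind the synchronisation of Bowman's discs, using the figures
# in the description above (2:1 gearing, 8 magnets on the central disc,
# 4 on each outer disc).

n_central, n_outer = 8, 4
gear_ratio = 2                     # outer discs turn twice per central-disc turn

# Angular interval between magnet passings, measured in central-disc degrees:
pitch_central = 360 / n_central                  # 45 degrees
pitch_outer   = 360 / (n_outer * gear_ratio)     # 4 magnets x 2 turns -> 45 degrees
print(pitch_central, pitch_outer, pitch_central == pitch_outer)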


Figure 1. Table-top, 4-Wheel/Quad design Permanent Magnet Motor.

Estimated construction date: late 1940s to early 1950s; shown in "Transverse Paraphysics" (1982) by J. G. Gallimore (Tesla Book Company). Estimated diameter of each magnetic wheel: 15" to 18". Estimated weight of each wheel: 150 lbs.

The operation of the magnetic device required the positioning of a single cylindrical permanent magnet which was placed at an angle relative to the lower quadrant of the end discs. This single magnet acted as the actuator magnet which caused the rotation of the disc by unbalancing the magnetic force of the three magnetic discs.

While these early demonstrations did spark some interest, Bowman never received enough financial support to continue his research and his PMM was eventually dismantled and destroyed.

Probably the best-known attempt at a single-rotor PMM was by Howard Johnson (Patent # 4,151,431). But its failure to reach any degree of commercial acceptance must be attributed to its natural slow speed of rotation.

Johnson's model was divided into two distinct types of unit, one linear and one rotary, with the most interest directed toward the rotary version due to the possibility of it leading to a no-power-input generator. However, after considerable effort and exaggerated reports of success, it became obvious that the modest torque produced too little horsepower.


Figure 2. Bowman Permanent Magnet Motor.

The Johnson rotary PMM consists of multiple, equally spaced rectangular permanent magnets secured to a rotor component, with multiple arcuate, or banana-shaped, (special) permanent magnets evenly spaced as the stator P/M's. This arrangement causes a positive preponderance of magnetic force vectors to act on the rotor P/M's, thus causing rotation in one direction. The main problem with these special types of P/M's is that it is impossible to obtain a major preponderance of magnetic force vectors acting on the rotor magnets, so that a severe operational tradeoff must be made in order to achieve some degree of positive rotation.

A magnetic force preponderance of around 25% is about all that can be expected as an optimum value for this type of design, and there is very little that can be done to improve on this basic geometric configuration. Some performance improvement and a slight cost reduction can be expected by switching from the high-cost samarium-cobalt P/M's to the latest NIB (neodymium-iron-boron) P/M's, but this will not overcome the basic deficiencies of the PMM concept. This research thus proved the impracticality of single-rotor PMM's.

Another PMM was built by Kure-Tekko of Japan. This compound P/M-E/M motor functions in a high-speed hybrid motor environment. The unit consists of a high-induction permanent magnet (samarium-cobalt P/M) located within a plain rotor component, which is given an initial rotating impulse by a precisely-timed electromagnetic station slightly offset from the top-dead-center of the stator.


Figure 3. Kure-Tekko type of permanent magnet motor, with top spinner.

Later, a modified version of the Kure-Tekko unit was built, with a top magnetic-attraction spinner added. This unit revolves independently of the main rotor to attract each of the rotor magnets and drive them into a small air gap to start each rotational cycle. The spinner is powered by a 12-volt DC motor. While there is some potential here, and continuous operation of the main rotor at 60 rpm has been achieved, the design cannot truly be considered completely successful, since a major portion of the rotor's torque comes from the spinner (and its motor).

Another PMM prototype from Japan is a large, multi-wheel model built by Kouhei Minato of Tokyo. The interesting feature of Mr. Minato's PMM geometry is that the angling of the magnets on both wheels provides a uniformly shifting attitude between the opposite magnet sets, above and below the wheel centerlines. This uniformly shifting geometry is used to advantage in this concept.


Figure 5. Large multi-wheel prototype by Kouhei Minato of Tokyo.

The most unique feature of the P/M section is the novel uniformly opening spiral path of permanent magnets arrayed as the stator of the unit. This uniformly diminishing repulsive magnetic path directly interacts with the SAM P/M segment, causing a rotational "squeeze" on the rotor, so that it revolves rapidly. Another way to consider the reaction is that the rotor's P/M segment is forced to revolve from a higher, magnetic repulsive potential to a lower one, by natural magnetic potentials.

This specific permanent magnet motor application overcomes one of the serious deficiencies of all previous permanent magnet motor designs, bar none. The usual permanent magnet motor design, such as the Johnson PMM, is very much handicapped by low-speed operation. However, a compound or hybrid E/M-P/M unit can result in a significant operating arc over the whole 290 degrees, and this full arcuate motion translates directly into high speed.


Figure 6. Minato's permanent magnet motor. Multi-wheel set up overcomes slow speed problem while torque outputs of rotors become cumulative.

The present dual rotor-spinner PMM concept being developed at Technidyne features the use of a relatively high-speed, motor-driven magnetic "spinner" as the input drive means. The spinner concept was first used on the Kure-Tekko PMM unit, but its form is elongated in this design. The unit employs stationary reactive permanent magnet sets in an arc around the lower portion of the main rotor (not shown). These stator magnets assist the revolving of the main rotor.


Figure 7. Dual rotor and spinner permanent magnet motor by Technidyne, Florida. A high-speed motor-driven magnetic spinner is used as input drive means, after the Kure-Tekko system. The goal is to achieve over-efficiency.

Aneutronic energy - Search for nonradioactive nonproliferating nuclear power

Bogdan Maglich
The Tesla Foundation Inc.
P. O. Box 3037
Princeton, New Jersey 08543
United States of America

Can we design a nuclear power source that -- like Robbie in Asimov's classic tale "I, Robot" -- is preprogrammed never to harm a human?

Can there be a nuclear process whose fuel will never be converted into nuclear weapons?

The recent report (1) of a special committee of the US National Research Council implies that the world may be only one step away from being able to say "yes" to both of these questions. Conclusions of the First International Symposium on Feasibility of Aneutronic Power, held at the Institute for Advanced Study in Princeton in the Fall of 1987, suggest that this last step may well be imminent.

What is aneutronic?

Energy-releasing nuclear reactions involving nonradioactive nuclei (both as the reactants and reaction products) and producing no neutrons have been known for half a century. The first one discovered was the fission of lithium-7 by protons, p + 7Li → 2 4He + 17 MeV, observed at the Cavendish Lab by Cockcroft and Walton in 1932. A number of similar fission and fusion reactions were subsequently found. They can be divided into three classes: fission of light metals by protons, fission of light metals by 3He nuclei, which produces protons, and fusion reactions involving 3He, which produce protons. This is why these reactions have been referred to as the "proton-based fuel cycle." We will refer to them as aneutronic reactions.

We define a nuclear reaction as "aneutronic" if not more than 1% of the total energy released is carried by neutrons and if not more than 1% of the reactants ("fuel") and reaction products ("waste") are radionuclides. The definition is somewhat arbitrary and serves only as a guideline. Aneutronic reactions are neither conventional fission nor conventional fusion. Their final product in all cases is predominantly helium, a nonradioactive inert gas.
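The working definition can be written down as a trivial test; the sketch below simply encodes the two 1% thresholds stated above, with the fractions for the reaction under consideration supplied by the reader.

# A trivial encoding of the working definition above: both thresholds are 1%.

def is_aneutronic(neutron_energy_fraction, radionuclide_fraction):
    """True if no more than 1% of the released energy is carried by neutrons
    and no more than 1% of fuel plus reaction products are radionuclides."""
    return neutron_energy_fraction <= 0.01 and radionuclide_fraction <= 0.01

print(is_aneutronic(0.005, 0.002))   # comfortably within the definition
print(is_aneutronic(0.03, 0.005))    # a 3% neutronic reaction falls outside it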

Before the 1970's, no effort was made to develop a reactor based on aneutronic fuels as a power source, even though these reactions have the potential to release twice as much power per fuel weight as uranium fission. This neglect was due to the absence of the necessary technology and the lack of ecological or political motivation. Owing to the absence of chain reactions, aneutronic power production has no weapon (i.e., explosive) applications, and hence there has been no military interest either.

Stimulated by the energy crisis, vigorous studies of aneutronic energy processes were begun in the 1970's by one individual (J. Rand McNally, Jr., Oak Ridge) and six groups worldwide. Described as "advanced-fuel fusion" (1-3), the research encompassed a broad class of reactions ranging from the neutronic DD fusion to the pure aneutronic fusion of Helium-3 or fission of light metals by protons. McNally showed that proton- and 3He-induced fission of light metals could be made into a chainlike reaction, though different from the uranium fission chain (4). Theoretical studies were carried out at the University of Illinois (5), UCLA (6-8), TRW, the Technical University of Graz, Austria (9), and the University of Buenos Aires (10). Experimental work (1,8,9,11,12), supported by theory, was done by Aneutronic Energy Labs of United Sciences, Inc. at Princeton ("AELabs," formerly known as Fusion Energy Corp. a.k.a. Aneutronix).

In 1978, the Department of Energy's Ad Hoc Committee on Fusion, headed by John Foster of TRW and Burton Richter of SLAC, strongly recommended research on the proton-based cycles as an area "in which significant breakthroughs might be made that might change the whole picture" (13). In 1980, the Department of Energy's Research Advisory Board, led by S. Buchsbaum and involving NASA Director J. Fletcher, M. Goldberger and W. Panofsky, et al., recommended a "strong program on the proton-based cycles." (14)

Between 1972 and 1984, 93% (about $21.5 million) of the $23 million in funding for aneutronic research came from the private sector: EPRI ($1.5 million) and AELabs, Inc. ($20 million). Seven percent ($1.5 million) came from the DOE. EPRI and DOE funding of all advanced fuel research was stopped, however, in 1980, after TRW and UCLA groups concluded jointly that Tokamak-type thermal plasma fusion machines could not burn aneutronic fuels (15).

Independently, an MIT study commissioned by the NSF concluded that (a) DT reactors would not be economically competitive with the next generation of fission reactors, and (b) advanced fuel could not burn in a Maxwellian plasma (16). The revival of interest in this energy source in the mid-1980's resulted from the introduction of new concepts that were borrowed from particle physics research.

Success of the Migma IV experiment

AELabs continued its experimental and theoretical efforts, based on a technique derived from colliding beams known as self-colliding orbits or migma-plasma (17). Migma-plasma, a nonthermal (non-Maxwellian) system, appeared to be free from the problems endemic to thermal plasma. It is a hybrid physical state between colliding beams and plasma, but it does not meet the definition of either. In an experiment carried out in 1982, referred to as Migma IV, AELabs demonstrated that a 1-MeV deuteron migma can be neutralized by oscillating electrons and exceed the space charge limit density without instabilities. It was confined for 30 seconds at the fuel densities at which thermal plasma undergoing similar confinement ("simple mirror") breaks down due to disruptive instabilities (18). The technical figure of merit known as the triple product (temperature x confinement time x density) reached 4 x 10^14 keV sec per cubic centimeter in Migma IV, exceeding that reached by tokamaks. The fuel density of migma was 1000 times lower than that of the best tokamak, but migma's temperature was 100 times higher and its confinement 15 times longer, so that the product of temperature and confinement time was 1,500 times higher than in a tokamak. The migma program had spent $23 million over 10 years; the Western world has spent $10 billion on the conventional plasma fusion program over the past 30 years. Describing this breakthrough in an article entitled "Clean Nuclear Power?", MIT's Technology Review reported in 1982:

"A growing community of physicists believes it may be possible to develop a type of nuclear power that does not require radioactive fuel and does not produce radioactive waste. Unlike today's fission plants or the fusion plants or the fusion plants generally promoted by government-sponsored research, a nuclear power plant of this sort could not be converted to an atomic bomb factory." (19)

At about the same time, Professor Bruno Coppi of MIT showed theoretically that 3He-based fusion, which is nonradioactive and nearly aneutronic, could ignite in a tokamak (20).

Reflecting this development, the Senate's Energy Appropriations Committee stated in 1982:

"To date, basic research in the field of nuclear fission and fusion has largely overlooked the potential for aneutronic nuclear alternatives using light metals, such as lithium, that produce no radioactive side effects. The Committee recommends that the Department of Energy give higher priority to this non-radioactive and nonproliferative nuclear potential. The Department should allocate the funds necessary to conduct a feasibility study to test the conditions necessary to produce a prototype aneutronic reactor, and to fully examine the theoretical principles of self-sustaining aneutronic energy." (21)

The DOE has not responded to the recommendation.

In 1984, Congress asked the Defense Department to undertake a feasibility study of aneutronic energy as a space power source (22). Since an aneutronic reactor would be small and light, unencumbered by massive shielding, it could offer a power source for aerospace, among other applications. Funds for aneutronic energy research were included in the budget for FY 1985. The initial phase of the program sought to establish the concept's feasibility and identify the subsequent steps needed to develop an aneutronic electrical power source for space applications.

In 1985, the US Air Force contracted Aneutronic Energy Laboratories of United Sciences, Inc., to conduct $1 million worth of plasma and energy balance simulation studies using two state-of-the-art CRAY supercomputers. The early results, presented at APS meetings in Washington in May 1986, and in Baltimore in November 1986, indicated that: (a) proton- and 3He-based fission of lithium-6 is indeed a chaining process; (b) in a diamagnetic migma, the scientific power out-to-input ratio Q could exceed 1, as compared to Q in the thermal plasma case; and (c) the 3He+d fusion reactor, as calculated by Coppi, seems feasible with a "scientific" Q = and with 100 times less radioactive waste and 20 times less neutron flux than uranium fission or tritium fusion (23,24).

In 1987, encouraged by the initial results of these computer simulation studies, the Air Force-sponsored R&D program called for $1.5 million in further feasibility studies for 1987-88, aimed at a conceptual point design of an aneutronic reactor for space uses, to be done by AELabs, with Bechtel National, Inc., and others as subcontractors.

In 1986, the US Air Force asked the Air Force Studies Board of the National Academy of Sciences to form a Committee on Aneutronic Fusion to "provide an assessment of the merits of pursuing aneutronic research for space-based propulsion applications." The committee report was released in the Fall of 1987.

The NRC report singled out one nuclear reaction, a borderline case of this definition, as most promising and closest to realization: the reactor fueled with a mixture of deuterium and helium-3, which is 1-3% neutronic. We refer to it as the "DeHe-3 reactor". The report describes it as "more feasible and attractive for select Air Force applications," and states that, for it, "... no insurmountable technical problems are envisioned" after the "... uncertainty of the physics of the fuel containment" is resolved. According to the report, if "further research" on aneutronic fusion fuels like the mixture of deuterium and light helium ("DeHe-3") should "demonstrate that substantial improvements in plasma lifetimes, density, and energy can be obtained," it would become a "viable option" since "no other insurmountable technical problems are envisioned." This means that, in the case of aneutronic fusion, its scientific demonstration will be practically its engineering demonstration, because the use of aneutronic fuels "to reduce neutrons would reduce shielding requirements, radiation damage to materials, and radioactivity." This is in sharp contrast to the ongoing conventional fusion program (tokamak), which is based on radioactive tritium fuel and which is believed to require 30 years to pass from its scientific to its engineering demonstration.

The "further research" referred to in the report is the only active laboratory research program of its kind in the world: that of Aneutronic Energy Labs, Princeton, New Jersey, presently working under USAF funding. It has completed four out of five stages of its migma program of aneutronic energy at a cost of $23,000,000 (see diagram of progress). Its planned fifth and last stage of research is designed exactly to demonstrate the "substantial improvements" cited by the committee as the last step needed to prove the viability. But already in the fourth stage of its research (completed in 1982), the Aneutronic group has exceeded in the technical figure of merit the performance of the 30-year-old deuterium tritium fusion program, on which the Western world alone has spent over $10,000,000,000.

The self-collider or migma device is the only device built, so far, in which the ability to burn aneutronic fuel has been demonstrated. The "uncertainty" referred to in the NRC report arises from the fact that high fuel density is required for net power production, while only low and medium fuel densities were demonstrated in Self-Collider. High fuel densities have been achieved in plasmas but not in a migma; a plasma cannot burn aneutronic fuels because it is not sufficiently hot. Four increase-in-density experiments on self-collider devices were successfully completed by United Sciences, Inc. in the period 1973-82, at a cost of $23,000,000 (1987 dollars), all of which came from private sources. In these experiments, referred to as Migma I, II, III, and IV, the fuel density increased 100 million-fold (see Diagram of Progress). Another 1000-fold density increase is required for a reactor demonstration.

International symposium on aneutronic power

At the initiative of AELabs, the First International Symposium on the Feasibility of Aneutronic Power was held at the Institute for Advanced Study, Princeton, NJ, on September 10-11, 1987. About 100 nuclear and plasma scientists from twenty countries attended the meeting. The keynote speaker was Professor Murray Gell-Mann of Cal Tech, Nobel Laureate in Physics.

The important results of the Symposium were:

- Independent confirmations of the results obtained by United Sciences under its USAF contract on aneutronic reactor simulation were reported by research groups in Japan and Austria, and at MIT and Science Applications International: the DeHe-3 reactor is only 0.5% radioactive and 1-2% neutronic, that is, 200 times less radioactive and 100 times less neutronic than any other nuclear power system, fission or fusion.

- Professor M. Rosenbluth, director of the Center for Fusion Studies, University of Texas at Austin, who is considered to be the nation's leading plasma physicist, has produced a theory of migma stability. The results of his research group were presented at the Symposium.

The symposium proceedings are published as a book entitled "Aneutronic Energy" by the international Journal Nuclear Instruments and Methods in Physics Research, which has the largest circulation in the nuclear sciences.

Strategic and commercial ramifications of aneutronic power

A: Aerospace

(1) Reactor weight. The absence of neutrons and radioactivity obviates the need for shielding. Since the weight of shielding in a nuclear fission reactor is at least 100 times greater than that of the reactor itself, a very large power-to-weight ratio (specific power) is projected: 1 megawatt per ton versus 1 megawatt per 100 tons with a uranium reactor (Bechtel's engineering studies are aimed at checking this).

(2) Fuel weight. As in all nuclear systems, aneutronic fuel energy is 100,000 times more concentrated than that of non-nuclear fuels, and the fuel weight is negligible: one kilogram of an aneutronic fuel such as helium-3 is equivalent to approximately 100 metric tons of fuel oil.

(3) Cost of fuel is projected to be 10% of the cost of uranium, as displayed below:


Fuel type               Fuel    Supplier    Purity (%)    Cost ($K/kg)    Unit Fuel Cost (FBU = 1.0), mil/kw(th) hr
Fusion                  D       S.R.L.      99.1          1               0.0008
Conventional Fusion     T       M.L.        94            7,500           42
Aneutronic Fusion       3He     M.L.        99.9          750             4.5

(4) Fuel availability. Helium-3 exists in nature in small quantities, but it can be bred in the same reactor that it fuels. The US government has 500 kilograms in reserve, which would run 200 space-based reactors for 20 years. Helium-3 is bred, and the price quoted is the breeding price. (The proposals to mine it on the moon where it is 10 times more abundant are unnecessary, as any reactor that can burn helium can breed helium from hydrogen).

(5) Plant capital cost per kilowatt capacity is projected to be less than 10% of the fission reactor cost for a large reactor, dropping to 1% for a small reactor (small fission reactors are uneconomical).

(6) Heat loss (waste heat or "heat pollution"). Almost all energy produced in aneutronic reactions is converted into electricity, versus 33% in conventional nuclear reactors. What to do with waste heat with a space reactor is a major technical problem, as the waste heat has nowhere to go (there is no air to conduct it away). An aneutronic plant's waste heat is 10-15% of the total energy generated, while for fission the figure is 67%.

(7) Modular. Units as small as 1 megawatt may be economical; thus a large power plant consisting of many small nuclear power units becomes feasible.

B. Power supply for radar and telecommunications

The smallest aneutronic power plant (30 kWe), similar in size to the proposed Migma V, would have wide application: this is the power needed to run a radar or CCC station.

C. Naval application

The advantage of lightweight aneutronic power production also applies to ship propulsion, where specific power is less critical than in the aerospace case, and where some shielding can be tolerated, allowing a degree of neutron production.

D. Terrestrial applications for utilities

First, an aneutronic reactor can be small, producing 1-10 megawatts of electric power (MWe), while the minimum economical size of a fission or (projected) fusion power plant is about 1000 MWe. Hence the small nuclear power plant, impossible today, becomes feasible. A small power unit implies mass production, which results in a much lower capital cost per kilowatt of capacity than with large reactors that are built one or two at a time. (Initial capital cost is one of the major barriers to nuclear energy in developing countries and in smaller communities of developed countries). Second, there are clear environmental advantages in nonradioactive fuel, nonradioactive waste, and the absence of waste heat (heat pollution).

E. Nonproliferation

Absence of neutrons means that the aneutronic reactor cannot breed plutonium for nuclear weapons. The weapons proliferation issue has been a second major barrier to the free sale of nuclear power plants to the developing world. The combination of a small power unit (small capital cost) and the absence of proliferation restrictions would open the way for the American nuclear industry to export nuclear power on a massive scale.

Since radioactive fuel, radioactive waste, heat pollution, and proliferation are the main current environmental and political issues for nuclear power, the implications of aneutronic nuclear energy for the environment are obvious: not only an acceptable but an attractive nuclear power technology.

References

1. "First Symposium on Clean Fusion (Advanced Fuel Fusion) 1976". Nuclear Instruments and Methods. 144, p. 1-86. 1977.

2. "Proceedings of the EPRI meeting on Advanced Fuel Fusion, 1978." Electric Power Research Institute, Palo Alto, California.

3. Ashworth, C. P. "A user's perspective on fusion", Part II. AAAS Meeting, Denver (Pacific Gas and Electric Co., San Francisco). 1977.

4. McNally, J.R. Jr.. Nuclear Fusion. 11. p. 187. 1971; ibid 18, p. 133. 1978.

5. Miley, G. Nuclear Instruments and Methods. 144. p. 9. 1977.

6. Conn, R., et al.. 8th International Conference on Plasma Physics and Controlled Nuclear Fusion Research. Brussels 1-10 July 1980. IAEA-CN-38/v-5.

7. Conn, R. and J. Shuy, "p+6 Li ignition and multipoles as advanced fuel cycle reactors". Nuclear Eng. Dept, University of Wisconsin. UWFDM-262. 26 sept. 1978.

8. Dawson, J.. "Advanced fusion reactors". Fusion. Vol. 1,B Academic Press. 1981.

9. Harms, A. and M. Heindler. Acta Phys. Austriaca. 52. p. 201. 1980.

10. Gratton, F.. Atomkernenergie (in English) 32. p. 121. 1978.

11. Ferrer, J. et al. Nuclear Instruments and Methods. 157. p. 269. 1978.

12. Maglich, B.. Atomkernenergie (in English). 32. p. 100. 1978.

13. Foster, J.S. et al. "Final Report of the ad hoc experts group on fusion". U.S. Dept. of Energy. Washington, D.C. June 7, 1978. Summarized in Phys. Today. Sept. 1978. p. 85.

14. Buchsbaum, S. J. et al. "Report on the DOE magnetic fusion program prepared by the Fusion Review Panel of the Energy Research Advisory Board". August 1980.

15. Gordon, J.D. et al. TRW Energy Division Group TRW-FRE-006 and EPRI RP 1663-1. 1981. (unpubl); S. Tamor, SAIC-85/3005/APPAT-63. 1986. (unpubl).

16. Lidsky, L.D. (MIT). "End product economics and fusion research program priorities". Study prepared for National Science Foundation. 1982.

17. Macek, R. and B. Maglich. Particle Accelerators. 1. p. 121. 1970.

18. Salameh, D. Al et al. Physical Review Letters. 54. p. 796. 1985; L. Lara and F. Gratton. Physics of Fluids. 29. p. 2332. 1986.

19. Technology Review. Nov. -Dec. 1982. p. 46.

20. Atzemi, S., B. Coppi, "Comments". Plasma Physics. 6. p. 77. 1980; B. Coppi. Physica Scripta, T2. p. 590. 1982.

21. 98th Congress, Senate Report 98-153. June 16, 1983.

22. 98th Congress, H.R. Report of the Comm. on Appropriations 98- 1086. p. 240.

23. "Final report of phase 1 aneutronic power source feasibility study and simulation". USAF Contract F49620-85-C-0098. 1986. United Sciences Report UNS-82-042.

24. Bulletin of the American Physical Society. 31. No. 9, 1557-8. 1986. (Baltimore PPD meeting).

LUMELOID* solar plastic film and LEPCON* submicron dipolar antennae on glass


_____________________________________
* a registered trademark of Phototherm, Inc.

Dr. Alvin M. Marks
Advanced Research Development, Inc.
359R Main Street
ATHOL, Massachusetts 01331
United States of America

A most profound change in the electric utility industry could be wrought by a commercially available, low-cost, efficient source of electric power from the sun.

Examples of such forthcoming solar energy conversion technologies are the LEPCON* and LUMELOID* systems, which are the trademarks of Phototherm, Inc. of Amherst, New Hampshire, a public corporation (OTC), dedicated to the research, development, manufacture and marketing of these products.

Glass panels and plastic sheets of LEPCON* and LUMELOID* respectively convert sunlight to electric power with an efficiency of 70 to 80%, at a cost of $ 0.01 to $ 0.02 per kwhr. The investment in 1 square meter of a LEPCON* glass panel is about $ 250.00. It produces 500 W of electric power in bright sunlight. The investment then is $ 0.50/W, spread over a life expected to exceed 25 years.

The investment in LUMELOID*, a thin, continuously cast polymer film, including electrodes and lamination to a supporting sheet, is about $ 5/sq m. The investment cost will be $ 0.01/W, spread over an expected life of 6 to 12 months in strong sunlight.

LEPCON* panels are particularly applicable to large-scale solar/electric power farms. They may be sited to produce an average of 400 W/sq m or 400 MW/sq km during the daytime, for example, in New Mexico, Nevada and similar regions where clouds seldom obscure the sun. An area 200 km x 200 km will produce 16 million MW at $ 0.01/kwhr during the daytime hours. Two-thirds of this energy must be stored for use during the dark hours. Electric energy storage technologies are known, and are being developed, which would serve this requirement. This would be enough to supply the electric grid for the entire U.S.
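The figures quoted above are mutually consistent, as the following quick check shows; all inputs are taken from the text, and the conversion from W/sq m to MW/sq km is purely numerical.

# Quick consistency check of the LEPCON* solar-farm figures quoted above.
# All inputs come from the text; 1 W/sq m is numerically equal to 1 MW/sq km.

panel_cost_per_m2 = 250.0    # USD per square meter of panel
peak_power_per_m2 = 500.0    # W in bright sunlight
avg_power_per_m2  = 400.0    # W, daytime average at a favourable site
side_km           = 200.0    # side of the square farm area

cost_per_watt = panel_cost_per_m2 / peak_power_per_m2    # -> 0.50 USD per peak watt
area_km2      = side_km ** 2                             # 40,000 sq km
farm_power_mw = avg_power_per_m2 * area_km2              # 400 MW/sq km x 40,000 sq km
print(f"{cost_per_watt:.2f} USD/W peak; {farm_power_mw / 1e6:.0f} million MW during daytime")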


Alternatively, LUMELOID* sheets will be utilized by many consumers of electric power to produce their own electric power. Such sheets may be installed on roofs or building sides, and connected through an electric storage device and a DC/AC inverter to provide electric power directly for all domestic needs at a few cents per kwhr. Excess electric power may be fed into the grid and sold to the local electric company, which will provide standby power to the consumer. The existing electric power grid will be essential, however, for industry and urban use, particularly in those areas where the sunlight is frequently obscured by clouds.

The economic implications

To totally convert the electric utility industry to solar electric power farms using LEPCON* panels will require an investment of trillions of dollars over many years. The economic and health benefits to the nation will be enormous:

1. Lower energy costs

2. Elimination of nuclear hazards

3. Elimination of the need to burn coal or oil fuel, thus diminishing air pollution, and preventing a disastrous Greenhouse Effect.

4. Decreased dependence on foreign oil imports, with consequent improvement in our balance of trade and reduction of the federal deficit.

5. A substantial increase in useful employment on a vast long-term project, which will enable a cutback in the funding of the wasteful military industrial complex.

6. If the electric utility industry becomes involved, as it must, then it can benefit from the large profits to be made in this huge endeavour. To start, it must provide the funds for the R & D, manufacturing facilities and the installation of the LEPCON* and LUMELOID* technologies.

The Technologies

Figure 1 shows a conventional metal-insulator-metal (MIM) tunnel diode, in which two dissimilar metals are separated by a small gap of about 30 Angstroms. Electrons can pass easily in one direction but not in reverse. Each metal has a different work function or natural electric barrier surrounding the metal. An electron moves readily in a metal, as though it were in empty space, but it bounces off a wall of the metal due to the potential barrier at the wall.


Figure 1. Prior art

Figure 2 shows a potential diagram for an MIM diode.


Figure 2. Potential diagram for MIM (metal-insulator-metal) diode

Figure 3 illustrates a quantum property of electrons known as "tunnelling". As an electron approaches a barrier with a small insulating gap, and an electric potential difference across it, it is either transmitted or reflected across the gap without loss of energy. In an MIM diode the electron can move more readily in one direction than in the other.


Figure 3. Quantum-mechanical tunnel penetration of a barrier. A plot of potential energy versus distance for a symmetrical rectangular barrier. E is the kinetic energy of the approaching particle; V is the barrier height above the particle energy; b is the barrier width.
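For orientation, the standard thick-barrier estimate of the tunnelling probability through a rectangular barrier can be evaluated in a few lines. The barrier height and widths below are illustrative assumptions, not data for the devices described here; the point is the strong exponential dependence on the gap width b.

# A minimal sketch of the textbook thick-barrier estimate for tunnelling
# through a rectangular barrier, using the symbols of Figure 3
# (V = barrier height above the particle energy, b = barrier width).
# The numerical values are illustrative assumptions, not device data.

import math

HBAR = 1.054_571_8e-34   # J s
M_E  = 9.109_383_7e-31   # kg
EV   = 1.602_176_6e-19   # J

def tunnelling_probability(V_ev, b_angstrom):
    kappa = math.sqrt(2.0 * M_E * V_ev * EV) / HBAR      # decay constant inside the barrier
    return math.exp(-2.0 * kappa * b_angstrom * 1e-10)   # T ~ exp(-2 * kappa * b)

for b in (5, 10, 20, 30):
    print(f"b = {b:2d} Angstrom, V = 1 eV: T ~ {tunnelling_probability(1.0, b):.1e}")

# The exponential dependence on b is why the insulating gap in an MIM diode
# must be only tens of Angstroms thick.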

Figure 4 shows a submicron rectenna element of an array in a LEPCON* panel. This element is also known as an "antenna-well diode".


Figure 4. Submicron rectenna element, or "antenna-well diode"

The light photon has an electric field with its direction at right angles to the light ray direction. The photon energy is totally absorbed by an electron in a metal strip. The photon transfers its energy without loss to the electron as kinetic energy in the mostly empty space in the metal. The electron moves parallel to the electric vector of the light, to the right or to the left. If the electron moves to the right, it bounces off the high-potential barrier at the wall and then moves to the left. So all the electrons eventually approach the tunnel diode on the left, where each electron is either transmitted or reflected without energy loss, as shown in Figure 3; all electrons are eventually transmitted through the tunnel diode without energy loss. However, a potential difference across the diode will convert the kinetic energy to electric energy which will just equal the photon energy. Thus the light photon energy is converted to electric energy without loss.

This differs from the conventional photovoltaic device, which requires that the electron move parallel to the light beam into a semiconductor layer, which it can only do after losing energy, and so the photovoltaic devices are fundamentally flawed.

There are also present in the metal strip many low-energy (thermal) electrons which do not take part in this energy conversion, except to provide an extra electron for each photon-electron interaction and, from another part of the circuit, to replace the electrons being transmitted through the diode. In bright sunlight, about one photon-electron energy conversion will occur on average every nanosecond.

Figure 5 shows a LEPCON* series-parallel configuration. Light is resolved into two electric vectors; a first electric vector parallel to the array axis is totally absorbed and converted to electric power, and a second electric vector normal to the array axis is totally transmitted as polarized light. Previous work with polarized light materials and microwave rectenna arrays shows the system to be about 80% efficient. In this system, 40% of the light is then converted to electric power by the antenna array, and 40% is transmitted. The transmitted light may be passed through a second LEPCON* array at right angles to the first, which will convert 80% of the transmitted component to electric power, thus converting a total of 72% of the incident light to electric power.
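The 72% figure follows directly from the stated 80% per-array efficiency and the even split of unpolarized light between the two components, as this short check shows.

# Check of the two-stage conversion arithmetic quoted above: each array
# converts 80% of the polarization component presented to it, and unpolarized
# sunlight splits evenly between the two components.

stage_eff = 0.80
converted_first  = 0.5 * stage_eff            # 40% of the incident light
transmitted      = 0.5 * stage_eff            # 40% passes on as polarized light
converted_second = transmitted * stage_eff    # 32% recovered by the crossed second array
print(f"{converted_first + converted_second:.0%}")   # -> 72%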


Figure 5. Series-parallel configuration

Figure 6 shows how a single LEPCON* array may be used in combination with a quarter-wave retardation sheet and light-reflecting layer to accomplish the complete conversion of the incident light to electric power at about the same efficiency.


Figure 6. Array with retardation sheet

LUMELOID*

A LUMELOID* sheet is a light/electric power converter. The sheet is a thin (8 micrometers, or .0003") polymeric film. The polymer film is prepared by a method similar to that now employed commercially in the manufacture of polarized film, and using much the same equipment, but with a different chemistry, and with electrodes embedded in the film to gather electric power.

Recent work in the field of photosynthesis in green plants has resulted in synthetic chemicals which mimic the natural process. In the plant, the photosynthetic chemical comprises an antenna, which is similar to a long-chain carbon molecule known as polyacetylene. This is attached to an electron donor-acceptor complex, here shown as a large ring and a small ring, respectively known as porphyrin and quinone. The long-chain molecule acts as a conductive antenna, which resolves and converts one-half of the photon energy to electron energy, and transmits the other one-half, as described above for the LEPCON*. The electron energy is stored on the large donor ring, and transmitted by tunneling across a small gap, the insulating chain of carbon atoms between the large donor ring and the small acceptor ring. The small acceptor ring then holds the electron at a greater potential than at the start. So far this is analogous to the LEPCON*.

Figure 8 shows an energy diagram typical of a Donor-Acceptor Complex, which is analogous to the energy diagram shown in Figure 2 for a LEPCON*.


Figure 8. Energy diagram of a typical Donor-Acceptor Complex

However, in the green plant, natural photosynthesis uses the electric energy it has stored on the acceptor to drive the chemical synthesis of the carbohydrates and other complex chemicals in the living cell.

Figures 9 to 12 inclusive show the similarities and differences of LUMELOID* compared to natural photosynthesis. The basic difference is that the chemical synthesis step of the natural photosynthesis process is eliminated, and an entire photosynthetic molecule such as is shown in Figure 7 is connected head-to-tail to another such molecule. This is shown in Figure 12, where the long-chain conductor molecules (52) and the donor-acceptor molecular diodes (53), are oriented parallel to each other and connected head-to-tail within the polymer sheet.


Figure 7. A synthetic molecule which mimics photosynthesis

Figure 9 shows a cross-section of the polymer sheet parallel to the long axis of the photosynthetic molecule. Electrodes 41 and 42 are shown in Figures 9, 11 and 12, connecting the electric power output to the load 75. The light power input is represented by the photon 2, and the direction of the resolved electric vector is along the X-axis.


Figure 9.


Figure 10.


Figure 11.


Figure 12.

Figure 10 is a cross-section through the XOZ plane. The conductive chains are shown as large dots.

The manufacturing process

The manufacturing process resembles the conventional commercial manufacture of polarizing film. In Stage 1, a viscous polymer solution is made with these chemical constituents of polarizing film:

1) Solvent molecules;
2) long-chain polymer molecules;
3) iodine molecules;
4) cross-linking chemicals to tie the chains in a bundle after they are aligned;
5) (OH) groups on the side of the polymer chains to react with the cross-linkers.

In Stage 2, the polymer solution is cast on a moving belt of a non-reactant metal, partially dried to eliminate most of the solvent, and stretch oriented. The result is that the polymer chains are drawn parallel and the cross-linkers hold them that way. The separate iodine molecules now crystallize in the spaces between the parallel chains forming a linear electrical conductor. These react with light photons as described above, only in this case, since polarizers lack molecular diodes and electrodes, the electric power is dissipated internally as heat.

Figure 6 shows the first step in the manufacture of a LUMELOID* film, which is the preparation of a polymer solution similar to that used in the polarizing-film process described above. In this case, however, a constituent No. 6 is added: the molecular diode. When a molecular diode is exposed to light, its electric charges separate and it acquires a dipole moment; that is, it experiences a torque which aligns it parallel to the direction of the applied electric field.

The final stage in the manufacture of the LUMELOID* is similar to that described for polarizing film, except the additional steps of simultaneously illuminating the film and applying an electric field are utilized in the stretching step, and the electrodes are then subsequently applied.

References

The following US Patents include extensive bibliographies: 4,445,050 (LEPCON*) and 4,574,161 (LUMELOID*). Additional patents have been filed, which will issue in due course, in the US and foreign countries, which contain additional references.

Charged aerosol air purifiers for the suppression of acid rain

Alvin M. Marks
Advanced Research Development, Inc.
359R Main Street
ATHOL, Massachusetts 01331
United States of America

The emission of acid fumes from the combustion of fossil fuels has led to "Acid Rain". The emissions comprise fumes containing sulphur and nitrogen oxides which are converted to sulphuric and nitric acid, organic and metal carcinogens, solid and liquid particulates and carbon dioxide. The fumes originate in the industrial areas of the US and are carried by the wind to other states and to Canada, with a detrimental effect on the environment and health. The increase in carbon dioxide and other chemicals is causing a "Greenhouse effect", with a general warming of the climate of the Earth and possibly disastrous effects on agriculture. Moreover, the ozone layer is being depleted, and harmful ultraviolet rays from the Sun are causing an increase in skin cancer. "Acid Rain" does the most visible damage to trees, lakes, fish, and other wildlife. In recent years, the public has become aware of these problems, and is now aroused. Political action is demanded to clean up this air pollution.

However, proposed methods for the elimination of acid rain and other pollutants have been too costly or ineffective, and this has impeded progress on the cleanup.

During the early 1940's I became interested in charged aerosols as a means for the direct conversion of heat/kinetic power to electric power in a moving gas stream. One result of many years of R & D on charged aerosols is an efficient means of cleaning up air pollution. In 1965, I appeared before the Los Angeles Air Pollution Control Board and suggested that the smog afflicting that city could be eliminated by a charged aerosol spray from airplanes (6.3.1). In 1967, dramatic demonstrations of the charged aerosol air purifier were made to the predecessor of the U.S. Environmental Protection Agency, the Cincinnati Air Pollution Control Agency, and to the U.S. Senate, fully reported in the extensive testimony of record (10). After all this effort, no action was taken until, in the late 1970's, there was large-scale use by an industrial giant (TRW, Inc.) using micron-sized (not submicron) charged droplets. Many electric power and chemical plants in the U.S., Japan and probably Canada and elsewhere were equipped with large-scale charged aerosol purifiers, resulting in millions of dollars in sales. I endeavoured to collect a royalty without success, because the cost of litigation was prohibitive. TRW designed equipment using charged droplets of too large a diameter; the surface area of the droplets was too small, which decreased their absorption effectiveness.

On March 31, 1970, U.S. Patent 3,503,704 was granted to Alvin M. Marks, entitled "Method and apparatus for suppressing fumes with charged aerosols", for air purification and other uses. Droplets from a capillary tube are passed through an electric field having an intensity just below breakdown. The droplets are thereby broken into minute particles and produce a fine spray. The large surface area of the spray effectively absorbs noxious gases. A simple device was invented based on this principle which should be placed on the smokestacks of every home, building and factory to eliminate noxious fumes at their source. It can be made in small manufacturing facilities, and its use should be mandated by law (2). When a charged aerosol source is placed in a moving gas stream, electric power can be generated in excess of that needed to operate the apparatus. A flow of clean air may be introduced around the capillary tube and charging electrode to avoid fouling and shorting (3). In another arrangement, high-temperature gases are first partially cooled before being introduced to the charged aerosol.

Experimental work on charged aerosol air purifiers is reported (3). A flow of noxious gases and particulates is passed through a conduit pipe. One or more ring-shaped electrodes are mounted downstream of the capillary tubes, and an intense electric field is applied between the capillary and the ring electrodes. A charged aerosol comprising monopolar charged liquid droplets passes through the ring electrode and mixes with the stream of noxious gas. The charged aerosol droplets have a very great surface area and rapidly and substantially completely absorb the noxious gases. As the liquid droplets progress through the conduit, they mutually repel each other by reason of their like charge, and coalesce upon the inner walls of the conduit from which they are collected as a liquid, together with the entrained and absorbed pollutants. Various apparatus employing charged aerosols, mathematical physics equations relating the variables, operating conditions and ranges to achieve high efficiency are derived, and the parameters experimentally evaluated.

It is shown that two charged aerosol air purifiers can be used simultaneously, interconnected as electric generators, so the electric power required to operate the devices is obtained from the gas stream and so is free (3).

Why are charged aerosols so effective when other methods are relatively ineffective? Because by charging the water droplets they break up into submicron droplets with over 10,000 times the surface area of the same volume of uncharged water droplets; this greatly increases the speed and the amount of noxious gases and particulates absorbed and reacted. Alkaline reactants may be included in the submicron water droplets to neutralize the acid in the noxious gases. The alkali reacts with, sequesters, and renders harmless the noxious gases. In stationary air purifiers, chemicals of considerable value may be recovered from the reactants.
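As a rough numerical check of this surface-area claim (a minimal sketch; the droplet diameters used below are illustrative assumptions, not values from the original experiments), the total surface area of a fixed volume of water divided into droplets of diameter d is 6V/d, so reducing the diameter by a factor of 10,000 multiplies the available surface by the same factor:

# Sketch: surface area of a fixed water volume divided into equal droplets.
# Diameters are illustrative assumptions, not values from the paper.
import math

def total_surface_area(volume_m3, droplet_diameter_m):
    """Total surface area (m^2) when a volume is split into equal spheres."""
    r = droplet_diameter_m / 2.0
    n_droplets = volume_m3 / ((4.0 / 3.0) * math.pi * r**3)
    return n_droplets * 4.0 * math.pi * r**2      # equals 6 * V / d

V = 1e-3                                          # 1 litre of water, in m^3
coarse = total_surface_area(V, 1e-3)              # 1 mm uncharged droplets
fine = total_surface_area(V, 1e-7)                # 0.1 micron charged droplets
print(coarse, fine, fine / coarse)                # ratio is 10,000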

Charged aerosol purifiers are simple, low-cost and effective means to eliminate "Acid Rain" which should be attached to every stack or chimney exhausting noxious gases.

Explanation of the Figures

Figure 1 a is a cross-section of a simple charged aerosol air purifier (CAAP). It comprises a single grounded capillary tube mounted axially within a tube. An input of polluted air into the tube passes through a ring electrode charged to about 6 kV (+ or -). When a jet of fluid (water, a water solution or a suspension) issues from the capillary tube, a spray of submicron, mono-charged droplets is produced in the polluted air. The charged droplets have an enormous surface area which quickly absorbs the gas and particulates. The charged droplets and their load of pollutants repel each other and deposit on the wall of the tube, where they coalesce into a liquid film which runs off into a waste container. Only clean air is emitted (1).


Figure 1 a.

Figure 1 b is a sectional view of a CAAP mounted on the top of a chimney. The chimney could be on a house burning a fuel such as wood, which produces a polluted effluent. The upward flow of the polluted air is redirected downward and passes through a plurality of capillaries which produce a charged aerosol spray. The charged droplets containing the pollutants collect on the wall of the bottom container and run off as waste fluid. Clean air is drawn upward through a central tube, aided by a small fan if necessary (2).


Figure 1 b.

Figure 2 shows an airplane with two charged aerosol air purifiers, each mounted under one wing. One or more tanks are provided to hold a supply of alkaline water. Calcium hydroxide may be used as the alkali.

However, due to its low solubility in water (7), to obtain a high concentration (for example, 20%) a calcium hydroxide sol should be employed (8). Calcium hydroxide is often used as a fertilizer and, when reacted with nitrogen oxides, produces other fertilizers such as calcium nitrate. Calcium sulphate and calcium sulphite also precipitate, and may be refined to provide pure sulphur. Hence the waste effluent recovered from a stationary CAAP may have considerable commercial value. The CAAP with calcium hydroxide in a charged aerosol droplet will also react with CO2 in the air and form insoluble calcium carbonate, which precipitates from the water solution.
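The products mentioned above follow from the familiar reactions of slaked lime with the acid gases and with CO2, written here as representative overall reactions (a simplified summary; the detailed aqueous pathways are not given in the original):

\[
\mathrm{Ca(OH)_2 + SO_2 \rightarrow CaSO_3 + H_2O}, \qquad
\mathrm{CaSO_3 + \tfrac{1}{2}O_2 \rightarrow CaSO_4}
\]
\[
\mathrm{Ca(OH)_2 + 2\,HNO_3 \rightarrow Ca(NO_3)_2 + 2\,H_2O}, \qquad
\mathrm{Ca(OH)_2 + CO_2 \rightarrow CaCO_3 + H_2O}
\]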

Calcium oxide is a major industrial chemical. In the commercial manufacture of calcium oxide, carbon dioxide is given off when calcium carbonate rock is heated (9). This CO2 goes into the atmosphere and contributes to the Greenhouse Effect. The trend could be reversed if this carbon dioxide were absorbed in greenhouses where useful plants are grown at accelerated rates due to a high concentration of CO2 or if other processes utilizing it in organic synthesis were incorporated in the cycle.


Figure 2.

In Figure 2, a positively charged (left) and a negatively charged (right) aerosol are produced; they meet in the atmosphere downstream of the airplane, where the droplets may partially or wholly neutralize each other.

Alternatively, it may be advantageous to produce a charged aerosol of one sign only. When sprayed over foliage, the charged aerosol particles are attracted to any surface, top or bottom, and may be effective in neutralizing acid deposited on such surfaces. In this case a charged aerosol of one sign may be emitted, but the airplane will become oppositely charged and must be periodically or continuously discharged to the atmosphere. This may be done, for example, by ion-emitting points. The ion-emitting points may be located at a distance from the charged aerosol source, on the wings or fuselage, or on a wire trailing at a considerable distance. Another method is to emit alternately a positive and a negative charged aerosol from the same source (not shown).

Previous experimental work on the Charged Aerosol Wind/Electric Generator Project (4) resulted in a single-capillary design suitable for laboratory tests. From this test device the data shown in Figure 3 were obtained, showing that capillaries made by a photo-lithographic process in a 12 thick stainless steel plate were suitable and would give excellent efficiencies of 70% at low water pressures of less than 5 psig.


Figure 3.

Figure 4 shows a charged aerosol source comprising a plurality of capillaries formed in thin stainless steel plates; and, Figures 5a and 5b show front and cross-sectional side views. Figure 6 shows a plurality of charged aerosol sources mounted in one square meter frames, which may be suspended under the airplane (instead of the simple, single capillary shown in Fig. 2).

Calculations were based on these assumed conditions: 1) a spray of 1,000 kg of 20% calcium hydroxide suspension in water, to cover one square kilometre of ground or lake; 2) an airplane traveling at 200 m/s; 3) a spray covering a width of 25 m. The results of the calculation show that about 200 s (3.3 min) is required to deposit 200 mg/sq m of Ca(OH)2. The charged aerosol is emitted at a rate of 5 cubic metres at a pressure of 5 psig, about 33,000 newtons/sq m (pascals), through the array of 75 µm orifices shown in Figure 6.
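A quick check of this arithmetic under the stated assumptions (a minimal sketch; the variable names are mine, not the author's):

# Sketch checking the deposition arithmetic under the stated assumptions.
suspension_mass_kg = 1000.0        # 20% Ca(OH)2 suspension carried by the airplane
caoh2_fraction = 0.20
area_m2 = 1.0e6                    # one square kilometre
speed_m_per_s = 200.0              # airplane speed
swath_width_m = 25.0               # width covered by the spray

coverage_rate = speed_m_per_s * swath_width_m      # m^2 swept per second
spray_time_s = area_m2 / coverage_rate             # time to cover 1 km^2
dose_mg_per_m2 = suspension_mass_kg * caoh2_fraction / area_m2 * 1e6

print(spray_time_s)     # 200 s, about 3.3 minutes
print(dose_mg_per_m2)   # 200 mg of Ca(OH)2 per square metre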


Figure 4,5,6.

In this example about 25 kW of electric power is required by the pumps and electrical system. The electric power may be supplied by a charged aerosol generator placed downstream of the charged aerosol jets, as shown in Figure 2. The example is illustrative; the parameters may be modified in the light of actual environmental data on acid rain.

References

1. U.S. Patent 3,503,704 issued to Alvin M. Marks on March 31, 1970. "Method and apparatus for suppressing fumes with charged aerosols"

2. U.S. Patent 3,520,662 issued to Alvin M. Marks on July 14, 1970. "Smokestack aerosol gas purifier".

3. U.S. Patent 3,960,505 issued to Alvin M. Marks on June 1, 1976. "Electrostatic air purifier using charged droplets".

4. Marks, Alvin M.. "A charged aerosol wind/electric power generator using induction electric charging with a microwater jet". 17th IECEC Proceedings. 1982.

5. List of 18 patents in the field of charged aerosols.

6. Additional information on: papers, Patents and publicity, by categories:
.1 Wind/Electric Power Generator
.1.1 ibid 4.
.1.2 Science and Mechanics. Winter 1980, Cover Story "Incredible power fence" by James Hyypea.
.1.3 New York Times, January 25, 1984, "Generating low cost electricity"
.1.4 Patents: 4,206,396 and 4,443,248
.2 Heat/Electric (Tin-Aerosol) Generator
.2.1 17th IECEC Proceedings, 1982 "An Electrothermodynamic Ericsson Cycle heat/electric generator" by Alvin M. Marks.
.2.2 Science and Mechanics. September-October 1983. "Amazing tin aerosol generator" by James Hyypea
.2.3 Patents 4,395,648, 4,523,112 and 4,677,326
.2.4 19th IECEC Proceedings. 1984. "Electrothermodynamic equations of a charged aerosol generator"
.3 Charged Aerosol Air Purifier
.3.1 Los Angeles Herald-Examiner, January 25, 1965. "Instant rain to kill smog"
.3.2 Modern Plastics Magazine, August 1970. "More help for solid-waste disposability"
.3.3 Chemical Engineering. August 21, 1972.
.3.4 Patents 3,503,704, 3,520,662 and 3,960,505

7. CRC Handbook of Chemistry and Physics. 65th Ed. 1984-1985. p. B-82, No. c124, calcium hydroxide, solubility in water: 0.186 g/100 g of water at 0 C; 0.077 at 100 C. Molecular weight 74.09.

8. The Merck Index Tenth Edition. 1983. p. 1657. No. 1663, Calcium Oxide.

9. General Chemistry, 2nd Edition. McQuarrie and Rock, W.H. Freeman and Company. p. 120-1.

10. Hearings before the Subcommittee on Antitrust and Monopoly of the Committee on the Judiciary, United States Senate, Ninetieth Congress First Session Pursuant to S. Res. 26; Part 6; "New technologies and concentration", October 2,3,4,6, 1967; Testimony of Alvin M. Marks, p 3340-60; on the air purifier, illustrations p 3352-53; discussion p 3360.

Clean engines - A combination of advanced materials and a new engine design

James E. Smith
Randolph A. Churchill
Jacky Prucz
West Virginia University
MORGANTOWN, West Virginia 26506
United States of America

The initial concept behind the Stiller-Smith Engine was stimulated by a children's toy called a "do-nothing" machine. The toy is a simple wooden block divided into quarters by two grooves through which two small wooden pieces slide when the handle is turned (Figure 1). If the linkage that constitutes the handle is rotated at a constant angular velocity, the action of the sliders is sinusoidal, linear motion.

The "do-nothing" machine is in essence a kinematic inversion of a Scotch yoke often called an eliptic trammel. Or, in Beyer (1) it is referred to as a double slider or swinging cross slider crank device (Figure 2). The principal of this device was used by Leonardo da Vinci as his "oval-former" to shape and cut ellipses. Further work on identifying the motion of this device was described in 1875 by Reuleaux (2) and also by Prechtel (3).

To understand the Stiller-Smith Mechanism, first consider the motion of the double cross-slider. Figure 2 is a pictorial representation illustrating that the center point of the cross-slider link travels in a circular path.

Previous attempts to utilize a double cross-slider took advantage of this translational component. Figure 2 further indicates how the area around point B rotates in a direction counter to that of the translation. It is this rotational phenomenon which is harnessed by the Stiller-Smith Mechanism. Note that all points on the connecting "bar" travel in ellipses; point B, being unique, travels in a circle.
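A minimal numerical sketch of this trammel kinematics (bar length, traced point and drive angles are illustrative assumptions, not the engine's dimensions): with the two sliders constrained to perpendicular grooves and joined by a bar of length L, a point at distance a from one slider traces an ellipse with semi-axes (L - a) and a, and the midpoint (a = L/2) stays at constant radius L/2, i.e. traces a circle.

# Sketch of elliptic trammel (double cross-slider) kinematics.
# Sliders move along the x and y axes; the bar joining them has length L.
import math

L = 2.0                 # bar length (illustrative)
a = 0.5                 # distance of the traced point from the x-axis slider

for deg in range(0, 360, 30):
    t = math.radians(deg)           # drive angle
    x_slider = L * math.cos(t)      # slider on the x-axis
    y_slider = L * math.sin(t)      # slider on the y-axis
    # Point on the bar at distance a from the x-axis slider:
    px = (L - a) * math.cos(t)
    py = a * math.sin(t)
    # Check: (px/(L-a))^2 + (py/a)^2 = 1, an ellipse.
    # For a = L/2 the point stays at radius L/2, a circle (point B).
    print(deg, round(px, 3), round(py, 3))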

Engine description

Though the Stiller-Smith Mechanism is planar and the resulting engine has planar characteristics, there is a certain three-dimensionality to the actual construction. The mechanism may be used in a number of configurations; however, the fundamental version involves four cylinders in a cruciform layout. Each piston is directly connected to the piston across from it via a non-articulating connecting rod. This arrangement promotes an efficient exhaust cycle opposite the firing cylinder. The two connecting rods are normally perpendicular to each other (Figure 3) (4).


Figure 1. The "do-nothing machine"


Figure 2. The double cross-slider motion


Figure 3. Engine component layout model

At the center of each rod is a yoke for a radial bearing. The trammel link is now a circular gear with pins protruding, one in each direction. This lets the "floating gear" sit between the two connecting rods with a pin in each bearing/yoke. Output shafts are placed in opposite corners from each other with eccentrically mounted gears meshed with the floating gear. Figure 4 illustrates this layout as it appears in the actual prototype.


Figure 4. Stiller-Smith engine prototype floating gear mechanism

The block has linear bearings through which the connecting rods pass; these serve to isolate the combustion process from the mechanism. They also provide a scavenging area behind each piston for two-cycle operation, if preferred. (Two-cycle cylinders were easily adapted to the prototype shown by bolting "off-the-shelf" cylinders onto the square block.) The completed engine with mounts and test stand is shown in Figure 5. The engine is in an upright configuration to facilitate testing, but it may be placed and oriented in a variety of positions, both vertical and horizontal.


Figure 5. Stiller-Smith engine prototype.

Device advantages

Exciting advantages are predicted for the use of this mechanism in an internal combustion engine. Performance reliability and user familiarity must first be developed before large-scale commercialization can be realized. Until that time this type of engine may find application in research fields as a test frame for materials or other engine systems (5). The potential advantages of this device can be summarized as follows:

Table 1. Potential advantages

1. Increased power/weight ratio
2. Fewer moving parts
3. Improved balancing characteristics
4. Sinusoidal piston motion
5. Orientation variability
6. Isolated combustion/motion conversion processes
7. Non-articulating connecting rod
8. Improved ignition delay characteristics
9. Commonality of components
10. Decreased maintenance/down-time

These potential advantages are viewed as fundamental requirements for improvements in efficiency and in the processes relating to environmental improvement. Being lighter and smaller reduces vehicle size requirements and decreases fuel consumption. The utilization of some of the specialty materials will allow higher combustion chamber temperatures and thus the use of heavier or multiple fuels. In addition to the end-use benefits derived by the consumer using this engine, the user-transparent expenses (both monetary and environmental) of producing the engine and the fuels to operate it would be greatly reduced.

To make an engine of this type, or any engine, effective in a clean energy environment it must be able to operate reliably at its maximum efficiency. To achieve this goal new materials and techniques must be adopted. The following section briefly details some of the materials and processes that will be used in the future to maximize the effective use of fuel resources.

Ceramic materials

Tremendous achievements in materials science have been reported for several years; no longer is a designer restricted to using simple irons and metals. New materials, however, both complicate and enhance the design procedure. The breadth of material possibilities and the speed of new material introduction make it difficult to remain up-to-date with available products. With this expansive selection, though, comes the possibility of vastly improving current products and promoting new ones. Plastics, polymers, ceramics, composites, and even organic materials are quickly finding use in more applications.

The application of ceramic materials to internal combustion engines has become an important research and production strategy. The high insulating characteristics of these ceramics are being tested in an attempt to reduce the heat transfer away from the hot combustion gases. Ceramics have other advantages, such as lower density, which decreases moving mass. (This eases valve train requirements.)

Progress in raising combustion temperatures in the early days of engine design was restricted by the limitations of cast irons and other construction materials. Thick-walled combustion chambers were built to conduct heat away from the burning gases in the cylinder. Little else was accomplished in reducing heat transfer and raising engine cylinder temperatures until after World War II. Materials then examined included glass derivatives and others thought to have low thermal conductivities (6). Glass has excellent insulating qualities, a low expansion ratio and low cost, but unfortunately lacks sufficient strength for engines. Interest in glass (ceramic) coatings for engine components dates back to the 1950's (7). Some ceramics have been used in engines in small quantities, for example spark plug casings. (These act as an electrical insulator more than a thermal insulator.)

Not until the mid 1970's was significant progress made in combustion chamber materials. At this time compounds of silicon carbide (SiC) and silicon nitride (Si3N4) were made and used in cylinder construction. Table 2 lists some popular engine materials and their properties (8).

SiC, for example, is a common industrial abrasive. Advantages of SiC include good wear resistance, half the density of steel (reduced inertia), high temperature capability (it is sintered at 2200 C), a low coefficient of friction, low-cost fabrication and good corrosion resistance (9). Disadvantages are the material's brittleness, notch sensitivity and 18% shrinkage during sintering. SiC is also a poor insulator, causing large induced thermal stresses. It is used in some high temperature internal combustion engine applications and is to some degree successful. Efforts with SiC now centre on monolithic structures (one-piece construction) because of their low cost and ease of manufacture.

Table 2. Engine structural and insulating materials

Material      Ultimate flexure   Density   Young's modulus   Coeff. of therm. exp.   Coeff. of therm.
              strength (MPa)     (g/cc)    at 1260 K (GPa)   300-1260 K (10^-6/K)    cond. (W/m K)

Si3N4               300            3.1          300                  3.2                  12
SiC                 450            3.15         400                  4.5                  40
AMS                  20            2.2           12                  0.6                   1
ZrO2                300            5.7          200                  9.8                   2.5
Al2O3·TiO2           20            3.2           23                  3.0                   2
Cast iron           275            7.6           85                 10                    75

Silicon nitride is still a popular constituent for engines and is currently being investigated by GTE Laboratories (10). GTE has developed a composite structure consisting of a silicon nitride matrix with a silicon carbide whisker component. Such a structure should decrease ceramic brittleness in engines. Increased resistance to cracking and breaking are advantages of a whisker-reinforced material over a non-reinforced one. As with SiC, the low densities of these materials make them especially suited to reciprocating or oscillating parts such as pistons and valves.

Aluminium titanate (Al2O3·TiO2) has a desirably low thermal conductivity. However, the low strength of the material requires that a supporting base, such as a metallic substrate, be used. The low density of this material makes it desirable for oscillating parts where component mass is an important consideration. Piston inserts and exhaust system liners are good applications of aluminium titanate.

Aluminium magnesium silicate (AMS) is another candidate with a low coefficient of thermal expansion but poor strength. A particularly low coefficient of thermal expansion and a high resistance to thermal shock make this material applicable to situations of transient, changing thermal loads but not mechanical loads. Difficulty in joining it to metal structures would be encountered because of the large discrepancy in expansion coefficients.

Another ceramic application, although not shown in the table, should be mentioned: the use of ceramic fibres of aluminium oxide as reinforcing material in non-ferrous metals (11). Fibre contents of 35-40% by volume in cast aluminium parts show rigidity improvements of three to four times the conventional value. The problem with this concept is production cost.

Opinions differ slightly on what the desired requirements should be for new materials in engine components. The largest discrepancies concern thermal conductivity: some researchers favour high conductivity and others low (12,13). Another consideration is the thermal capacity of the material. Ceramics have about half the capacity of metals (11), a consequence of their lower density and conductivity. Lower thermal conductance means that less heat is dissipated into the metal structure from a ceramic lining or insert. A ceramic combustion chamber surface will reach cyclic operating conditions faster than a similar metal surface.

Zirconia is a ceramic material that has a very low thermal conductivity, good strength, a thermal expansion coefficient similar to that of metals, and the ability to withstand much higher temperatures than metals. Full stabilization of zirconia can be accomplished with the addition of CaO, MgO or Y2O3.

Alloys with 20% yttria or 5% calcia create fully stabilized zirconia with good thermal expansion coefficients (14). Unfortunately, these tend to crack and spall off quickly. The resistance of yttria to fuel impurities is low, which decreases its reliability in an engine cylinder environment.

Magnesium or nickel have been added to partially stabilized zirconia (PSZ) to improve its strength and ductility characteristics respectively. Magnesia partially stabilized zirconia (MgPSZ) has a thermal expansion coefficient and elastic modulus close to those of iron and steel, and is suitable for liners, valve guides and seats, hot plates, tappet inserts and piston caps in the engine cylinder. MgPSZ contains 20-24% magnesia and has the highest fracture toughness of all the PSZ materials yet developed. Curing of MgPSZ is performed at 1700 C, giving a final density of 5.70-5.76 g/cubic cm.

Nickel PSZ has been developed in the hope of producing a material that is more ductile than other ceramics. In the harsh transient environment of an IC engine this quality will help compensate for the induced thermal and mechanical stresses. Because of its size, nickel acts as a spherical ball bearing on the molecular level, allowing the molecules of the material to slide over each other (15,16). This decreases the possibility of sudden brittle failure.

Another promising class of materials is the Syalon (Si-Al-O-N) ceramics, systems of silicon, aluminium, oxygen and nitrogen (17). The silicon nitride mentioned above is one derivative of this classification. A major advantage of these materials is their low creep at high temperatures; properties are maintained to around 1400 C. They also have a low density and a low coefficient of friction, which is good for reciprocating parts such as valves and for bearings.

Several other materials are presently being investigated for possible use in engine applications. One coating material which exhibits very promising, high-emissivity characteristics is called, simply, Ceramic Refractory type CT (18,19). High emissivity actually decreases the heat transfer effects by reflecting much of the heat back at the combustion process. Figure 6 illustrates the emissivity of common materials.


Figure 6. Emissivity characteristics of ceramic CT

At 3000 F the emissivity is listed at 98%. This material is a water-based silica-alumina and is only usable as a coating over an existing structure; monolithic pieces have not yet been developed. It is now used in furnaces and boilers and on quench baskets and racks, and potentially may improve fuel efficiencies. Recent developments in surface finish and hardness have made this material attractive to engine designers. Exact finishes are possible and the coating is relatively easy to apply to the structure.

Note that emissivity is considered more important than the heat transfer characteristics: reflecting the heat away from the surface can be achieved without a dangerous rise in the surface temperature. Adding carbon black will further increase the emissivity at the expense of some strength. These materials were also designed to resist corrosion, be very durable, and be non-toxic and non-flammable.

This refractory material also has an interesting cured structure that is worth describing. The material is sprayed on much as a paint would be. Drying takes between 12 and 24 hours, and the piece is then cured; this involves bringing the piece up to operating temperature. With this curing process the coating divides into three distinctive layers, as shown in Figure 7.


Figure 7. Layered structure of silica-alumina based mineral

Closest to the base material is a bonding or fusion layer that forms a chemical bond with the surface; this has proven to be very durable and able to withstand corrosion. The middle layer is one of expansion, accommodating any difference in thermal expansion between the other two layers. The outer layer is under more severe thermal stresses and wants to expand more than the others; the expansion layer allows this to happen without stressing the bonding layer. The outer layer is case-hardened to provide a hard surface against wear. The material has also been shown to decrease the amount of deposition onto cylinder walls, owing to the non-reactive nature of the surface. The manufacturer indicates that Caterpillar has applied this material to a cylinder wall in a test engine and that Purdue University has also done some work with this material in engines.

Other ceramic, composite, or advanced materials are being developed for different engine applications. Use of the lighter weight ceramics should improve response time in moving parts. Engine drive train components may use ceramic materials due to their improved hardness and wear capabilities. Friction coefficients and contact surfaces may also be improved. Some of the materials listed above may also work well as components in exhaust turbines, as rotors and port linings. Particulate filters may be another useful application of ceramics or composites.

While the information above is only a cursory review of ceramics and their potential importance to clean engines, it is presented to stress the complexity and breadth of the art and the importance of expanding this science. Each of these materials, a combination thereof, or one yet to be discovered will help solve the combustion chamber materials problem, but only after exhaustive years of research.

Composite materials

The analysis and use of composite materials in internal combustion engines has become an important research topic (20). Several modelling approaches examine the elasto-dynamic response of composite connecting rods in an engine. The emphasis is on tailoring the material system and lay-up to the need to reduce the magnitude and the elasto-dynamic oscillations of bearing loads. This constitutive modelling technique for composite links is, consequently, more complete and accurate, since it accounts for strength characteristics, extensional deformations and elastic coupling effects not included in prior efforts (21,22).

As an example, the physical model and notation selected for the analysis of an in-line slider-crank mechanism are shown in Figure 8. The crankshaft rotates at a constant angular velocity and there is no energy dissipation in the system due to either friction or material damping. The fibre-reinforced connecting rod is the only elastic link of the mechanism (Figure 9); the other links are made of steel and considered rigid. A large-mass flywheel is presumed to be attached to the crankshaft in order to keep its angular velocity constant.

A parametric study has been carried out on the analytical model described above. Its primary objective is to investigate the effects of the material design selected for the connecting rod on the dynamic response of the overall mechanism (20). The response characteristic chosen for this analysis is the vertical bearing load in the wrist-pin, which determines the side-wall force on the piston. The main input parameters are material properties and laminate lay-up of the fiber-reinforced connecting rod.


Figure 8. Physical model and notation.

The time variation of the side-wall force on the piston during one cycle of operation is shown in Figure 10. The composite rods give reductions in side-wall force magnitude, owing to their superior longitudinal strength and lower mass density.

These results indicate an upper limit for such benefits, as they correspond to unidirectional fibre orientation along the connecting rod axis, which is the optimal lay-up for the particular loading case considered here. Lower benefits should be expected for different lay-ups, which may be required by more general loading conditions. Besides mass characteristics, elasto-dynamic oscillations of the side-wall force magnitude are governed in this model primarily by the extensional and flexural stiffness of the connecting rod along its longitudinal axis. Since steel is stiffer than the fibre-reinforced composites selected for this investigation, the elasto-dynamic benefits of using composite rather than steel connecting rods are due to reduced inertia effects.
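For reference, the rigid-body baseline against which Figure 10 compares the composite rods can be sketched from the standard slider-crank relations (a minimal sketch; the dimensions, mass and speed below are illustrative assumptions, not the authors' data, and rod mass, friction and elasticity are neglected):

# Sketch of the rigid-body side-wall force for an in-line slider-crank.
# All values are illustrative assumptions, not taken from the paper.
import math

r = 0.04          # crank radius, m
l = 0.15          # connecting rod length, m
m_piston = 0.5    # piston assembly mass, kg
omega = 300.0     # crank speed, rad/s (about 2865 rpm)
F_gas = 0.0       # gas force on the piston, N (motored case assumed)

for deg in range(0, 360, 45):
    th = math.radians(deg)
    # Rod obliquity: sin(phi) = (r/l) sin(theta)
    phi = math.asin((r / l) * math.sin(th))
    # Approximate piston acceleration for small r/l:
    a_piston = r * omega**2 * (math.cos(th) + (r / l) * math.cos(2 * th))
    # Axial force on the piston, reacted through the inclined rod,
    # gives the side-wall (normal) force on the cylinder:
    F_side = (F_gas - m_piston * a_piston) * math.tan(phi)
    print(deg, round(F_side, 1))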

Summary

Research and technology are at hand to create new concepts in internal combustion engines. Taking advantage of them requires a redirection of present manufacturing; it does not demand the retooling of a whole industry, only a rethinking of the way we convert fuel to power.


Figure 9. Schematic of the ith ply in the connecting rod.


Figure 10. Side-wall forces predicted from rigid-body dynamics for various materials.

The composite material model presented permits a quantitative evaluation of the elasto-dynamic effects associated with material systems and lay-ups selected for fiber-reinforced tubular connecting rods in engines. It may be regarded as a step toward the development of a rational methodology for tailoring the directional dependent properties of composite-made machine parts to functional requirements of high-speed linkages. This is only one of many new techniques that are presently being used or emerging.

The Stiller-Smith Engine could revolutionize internal combustion engine design. The need for cleaner energy conversion and improved performance makes this engine very attractive for further study. What will make it become a leader in clean energy conversion will be the effective use of advanced materials, which is the primary goal of this on-going project.

References

1. Beyer, Rudolf. "The kinematic synthesis of mechanisms". McGraw Hill, New York, p. 62.

2. Reuleaux, F. "Theoretische kinematik". Friedrich Vieweg & Sohn. Braunschweig. 1875. p. 336.

3. Prechtel, "Technologische Enzyklopadie". (4) p. 425; (7) p. 246.

4. Smith, James E.. "The dynamic analysis of an elliptical mechanism for possible application to an internal combustion engine with a floating crank". Dissertation. West Virginia University. 1984.

5. George, Aaron C.. "A method for predicting cylinder pressure in the Stiller-Smith or other sinusoidal type engines". Thesis. West Virginia University. 1988.

6. Kamo, R.(Cummins) and Bryzik, W. (U.S. Army TARADCOM). "Adiabatic turbocompound engine performance prediction". SAE 780068.

7. French, D.D.J. (Ricardo Consulting Engineers) "Ceramics in reciprocating internal combustion engines". SAE 841135.

8. Kamo, R. and Bryzik, W., "Uncooled, unlubricated diesel?" Automotive Engineering, Vol. 87, n. 6. June 1979. pp. 59-61.

9. Timoney, S., (University College of Dublin, Ireland) and Flynn, G. (Carborundum Resistant Materials Co.). "A low friction, unlubricated SiC diesel engine". SAE 830313.

10. "Ceramic Composite". Automotive Engineering. December 1986. p. 22.

11. Walzer, Peter, Harmut Heinrich, and Manfred Langer. "Ceramic components in passenger-car diesel engine". SAE 850567.

12. Bryzik, Walter and Roy Kamo. "TACOM/Cummins adiabatic engine program". SAE 830314.

13. Marmach, M., et al. "Toughened PSZ ceramic: their role as advanced engine components". SAE 830318.

14. Moorhouse, Peter and Michael P. Johnson. (NEI-International Research and Development Co. Ltd.). "Development of tribological surfaces and insulating coatings for diesel engines". SAE 870161. SP-700.

15. Larson, D.C., J. W. Adams, L. R. Johnson, A. P. S. Teotia, and L. G. Hill. "Ceramic materials for advanced heat engines". Noyes. New Jersey. 1985.

16. Carr, Jeffrey and Jack Jones (Kaman Sciences Corp.). "Post densified Cr2O3 Coatings for adiabatic engines". SAE 840432. SP-571.

17. Lumby, R. J., P. Hodgson, N. E. Cother and A. Szweda (Lucas Cookson Syalon Limited). "Syalon ceramics for advanced engine components". SAE 850521.

18. Hellander, John C. (Ceramic-Refractory Corporation, Transfer, PA). SAE Pittsburgh Section. 2/17/87.

19. Ceramic-Refractory Corp. information sheet, Pittsburgh, PA.

20. Prucz, Jacky., Joseph D'Acquisto, James Smith. "Performance enhancement of flexible linkages by using fiber-reinforced composites", American Institute of Aeronautics and Astronautics, Inc., 1988.

21. Thompson, B.S., D. Zuccaro, D. Gamache and M. V. Gandhi. "An experimental and analytical study of a four bar mechanism with links fabricated from a fiber-reinforced composite material". Mechanism & Machine Theory. (18) 2. 1983. p. 165-71.

22. Sung, C.K., B. S. Thompson. "Material selection: an important parameter in the design of high-speed linkages". Mechanism & Machine Theory. (19) 4/5. 1984. p. 389-96.

23. Churchill, R.A., J.E. Smith, N. M. Clark and R. Turton. "Low-heat rejection engines - a concept review". SAE 880014 (International Congress and Exposition). Feb 29 - Mar 4, 1988.

Developments on the flexible mirror

Scott Strachan
6 Marchhall Court,
EDINBURGH EH 16 5HN
United Kingdom

Over the past few years our engineering group in Scotland has been working on the concept of optical-grade variable focus mirrors.

A group at Strathclyde University had devoted considerable research effort to the problem of generating a parabolic figure from a pressure-stressed membrane and had expanded on the work of Mueller in this field, mainly by introducing a separate stretching frame in addition to the mounting rim of the membrane mirror itself.

Our group concluded that this approach was probably limited, in that even if successful, the cost of the final structure was unlikely to be competitive with conventional glass mirrors.

We therefore adopted the alternative approach of aiming at maximum repeatability of a predictable figure even if this figure was not ideally parabolic.

We believed that the error from parabolic and the inevitable slight astigmatic distortion could probably be corrected at low cost with a small-diameter distortable plastic meniscus lens.

We found that, provided the membrane was bonded to a rim of sufficient accuracy (approximately 1/60,000th of the diameter as the tolerance on the height of the rim and 1/6,000th of the diameter as the circularity tolerance), the figure was entirely predictable for any membrane with a thickness of between 1/1,000th and 1/5,000th of the diameter.
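As a worked illustration of these ratios (my own arithmetic, applied to the 300 mm mirror quoted later in this article):

# Rim and membrane tolerances implied by the stated ratios, worked out
# for a 300 mm mirror (illustrative calculation only).
diameter_mm = 300.0

rim_height_tol_mm = diameter_mm / 60000.0    # 0.005 mm  (5 microns)
circularity_tol_mm = diameter_mm / 6000.0    # 0.05 mm   (50 microns)
membrane_thickness_range_mm = (diameter_mm / 5000.0, diameter_mm / 1000.0)

print(rim_height_tol_mm, circularity_tol_mm, membrane_thickness_range_mm)
# -> 0.005, 0.05, (0.06, 0.3): rim height held to 5 um, circularity to 50 um,
#    membrane thickness between 0.06 mm and 0.3 mm.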

Such mirrors form near-parabolic figures in the focal range f/5 to f/8. The parabolic error can be corrected by a meniscus lens at a cost of less than $10.00. The resulting mirror has a performance equivalent to a glass mirror with a figure accuracy of 1/8th of a wave of (yellow) light.

We are now in small-scale production of these mirrors and, even at our present level of production, have achieved the low cost of $600.00 for a variable focus 300 mm mirror with an uncorrected accuracy of 1/6th of a wave.

These mirrors have many uses; examples are light intensifiers, thermal amplifiers, laser collimators, variable-focus laser cutting optics and, of course, telescopes. Though the mirror's accuracy and stability will never seriously compete with the best glass mirrors, the advantage of variable focus means that there are many applications not open to glass. When the low cost is taken into account, there are also many applications for large aperture mirrors which were simply not worth the cost of a glass mirror.
