Saturday, June 30, 2007

Sustainability Council of NZ a Fundamentalist Creationist Sect

The Second Law of Thermodynamics is one of the pillars of the physical sciences. It has withstood the test of time, including numerous, often ingenious efforts to find exceptions or dispute its hegemony.

It was the physicist Erwin Schrödinger, in his legendary book What is Life? (1944), who catalyzed the modern approach to thermodynamics and evolution. He characterized a living system as being, quintessentially, an embodiment of thermodynamic order and disorder.

Harold Morowitz, one of the leading figures in biophysics and a major contributor to our collective effort to understand more fully the origins of life, inadvertently provided an illustration of the need for a broad, thermoeconomics paradigm in his path-breaking (and still valuable) volume on Energy Flow and Biology (1968). Recall how he proposed that the evolutionary process has been "driven" by the self-organizing influence of energy flows, mainly from the sun: "The flow of energy through a system acts to organize that system...Biological phenomena are ultimately consequences of the laws of physics" (p. 2).

In the penultimate chapter, where he explored ecological aspects of energy flows, Morowitz admitted "at this point, our analysis of ecology as well as evolution appears to be missing a principle" (p. 120). His conclusion: Although the flow of energy may be a necessary condition to induce molecular organization, "contrary to the usual situation in thermodynamics...the presence or absence of phosphorous would totally and completely alter the entire character of the biosphere" (p. 121).

Furthermore, as Morowitz noted earlier in his text, the lowest trophic level in the food chain is dependent on exogenous sources of free nitrogen, which would otherwise be a limiting condition (Liebig’s Limit) for the entire biosphere (as opposed to the abundant supply of energy). Finally, and most significant, Morowitz acknowledged that the functionally organized cyclical flow of matter and energy in nature requires a cybernetic explanation. "The existence of cycles implies that feedback must be operative in the system. Therefore, the general notions of control theory [cybernetics] and the general properties of servo networks must be characteristic of biological systems at the most fundamental level of operation" (p. 120). Exactly so. Biological evolution takes place within a situation-specific array of constraints and needed “resources”, and its course is also greatly affected by various kinds of “control information”.

Morowitz outlined four rules that bound the construction of “scientific” hypotheses and limit the ability of “men to play god”.

Two that are appropriate here are:

1 Thou shalt not violate the laws of physics and chemistry, for these are expressions of divine immanence.

2 Thou shalt not invoke miracles, for as Spinoza taught, they contravene the lawfulness of the Universe.

The Sustainability Council has published a paper entitled “A Convenient Untruth”, which over 56 pages proves that its solutions are indeed just that. Further, its invocation of miracles shows that the creationist beliefs evident in the title of the organization transgress the laws of physics and chemistry.

Their religious testament in the summary of the above paper shows that their ill-founded beliefs rest on a substantial gap in scientific acumen.

Agriculture has the potential to substantially reduce the nation’s greenhouse gas emissions. At a profit, the sector could meet its share of New Zealand’s emission reduction target under the Kyoto Protocol. As livestock accounts for half the nation’s total emissions, this would in turn meet about half the total excess emissions currently projected. This potential can be achieved through abatement of nitrous oxide emissions alone and is considerably greater than has generally been acknowledged.

VOCs and organic hydrocarbons are important drivers of both the biosphere’s attenuation and amplification mechanisms that ALLOW life to exist on Earth, acting through the nitrogen biogeochemical cycle and the ozone cycle. Here we will examine the atmospheric cycle.

The quantitative discussion of the impact of short-lived air pollutants such as NOx, CO, and NMHC on global warming has been extremely difficult. These gases have no greenhouse effect of their own, but since they control the concentration levels of the main greenhouse gases - methane, ozone, CFC substitutes (HCFCs) - they are called indirect greenhouse gases. The recently released IPCC Report stated that, along with greenhouse gases, the reduction of emissions of gases that affect their concentrations is necessary for stabilizing radiative forcing.

(1) Indirect greenhouse gases (NOx, CO, NMHC, etc.) trigger photochemical reactions when they are irradiated by ultraviolet light in the troposphere, thereby leading to the production of ozone and OH radicals.

(2) Since ozone is a potent greenhouse gas, reducing tropospheric ozone by controlling emissions is effective in curbing global warming. It is already known that the reduction of tropospheric ozone is most effectively achieved by controlling NOx concentrations. Moreover, since ozone has a short lifetime (1 to 2 weeks in summer, and approximately 2 months in winter), the abatement of global warming due to a decrease in NOx levels takes effect almost concurrently with the reduction of NOx.

(3) On the other hand, OH radicals react with another greenhouse gas, methane, removing it from the atmosphere. When NOx levels are lowered, OH radicals also decrease, thereby raising methane concentrations in the atmosphere. Rather than mitigating the problem, global warming is advanced due to methane's greenhouse effect. Moreover, methane has a long atmospheric lifetime (approximately 10 years), so the escalation in global warming caused by the reduction of NOx will continue for some decades after NOx levels have been lowered.

(4) It follows that an assessment of the warming effect induced by a reduction of NOx requires an evaluation of the short-term depletion of ozone together with the long-term increase in methane. A quantitative assessment combining these two factors shows that lowering NOx levels yields a net cooling benefit only for a period of several years and could thereafter result in accelerated global warming.
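The trade-off in (2)-(4) can be made concrete with a toy two-timescale model. All numbers below are illustrative assumptions, not IPCC values: a sustained NOx cut produces an ozone cooling that saturates within weeks, against a methane warming that builds over roughly a decade.

```python
import math

# Toy response to a sustained NOx cut. All magnitudes are illustrative
# assumptions: ozone forcing adjusts within weeks, the methane response
# builds over its ~10-year atmospheric lifetime.
TAU_O3 = 0.04    # ozone adjustment time, years (~2 weeks)
TAU_CH4 = 12.0   # methane perturbation timescale, years
F_O3 = -0.10     # assumed equilibrium ozone forcing change, W/m^2 (cooling)
F_CH4 = 0.20     # assumed equilibrium methane forcing change, W/m^2 (warming)

def net_forcing(t_years):
    """Net radiative forcing t years after the NOx reduction."""
    ozone = F_O3 * (1 - math.exp(-t_years / TAU_O3))
    methane = F_CH4 * (1 - math.exp(-t_years / TAU_CH4))
    return ozone + methane

# The short-lived cooling is overtaken by the long-lived methane warming:
for t in (1, 5, 10, 20, 30):
    print(t, round(net_forcing(t), 3))
```

With these assumed magnitudes the net effect is cooling for the first few years and warming thereafter, which is exactly the point of (4).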

The methane budget evolution is also affected by changes to atmospheric sinks. The principal sink (50-85%) is in situ oxidation by the OH radical, generated photolytically at a rate that depends in part upon the local presence of emissions such as hydrocarbons and other volatile organic compounds, CO and NOx. Other sinks include consumption by methanotrophic biota in aerated soils, transport to and destruction in the stratosphere (Prather et al., 2001), and removal by other tropospheric oxidants such as active chlorine (Allan et al., 2001a). Quantitative assessment of the global OH trend is difficult over any time scale. However, various modelling studies agree qualitatively that global OH levels have declined over the industrial era (Houweling et al., 2000; Prinn et al., 2001), though quantitative estimates vary over the range 7.5 to 27%. Nonetheless, during recent decades that decline may have been arrested (Lelieveld et al., 2002) or, within decadal intervals

Ozone photochemistry is driven by the interaction of the Sun's radiation with various gases in the atmosphere, particularly oxygen. The understanding of the basics of ozone photochemistry began with Chapman (1930), who hypothesized that UV radiation was responsible for ozone production and proceeded to lay the foundation of stratospheric photochemistry: the Chapman reactions. He proposed that atomic oxygen is formed by the splitting (dissociation) of O2 by high energy ultraviolet photons (i.e., packets of light energy with wavelengths shorter than 242 nanometers).

Ultraviolet (UV) radiation, though only a small fraction of the total solar output, is of remarkable importance to our planet. Ozone strongly absorbs UV radiation. As an example, if we looked at the top of the atmosphere and counted only the 250-nm wavelength photons striking a 1-square-centimeter area every second, we would count about 6,800,000,000,000 (that's 6.8 trillion, or 6.8 × 10^12) photons. Yet ozone is effective at absorbing these 250-nm photons. The ozone molecule is dissociated by these UV photons into O and O2 via the reaction O3 + hν → O2 + O.
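The energy carried by each of those photons follows directly from E = hc/λ; a quick calculation using the photon flux quoted above:

```python
# Energy of a 250-nm photon, E = hc/lambda, and the power carried by the
# quoted flux of 6.8e12 photons per cm^2 per second.
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
lam = 250e-9    # wavelength, m

E_photon = h * c / lam            # joules per photon
E_eV = E_photon / 1.602e-19       # same energy in electron volts

flux = 6.8e12                     # photons per cm^2 per second (figure from the text)
power = flux * E_photon           # W per cm^2 carried by this band

print(E_photon)  # ~7.9e-19 J
print(E_eV)      # ~5.0 eV, well above the O3 dissociation threshold
print(power)     # ~5.4e-6 W per cm^2
```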

Because the O atoms have such short lifetimes, they quickly reform ozone after dissociation, converting the energy of the photons at these wavelengths into thermal energy. Ozone is formed when an energetic ultraviolet photon splits an oxygen molecule (O2). These oxygen atoms quickly react with other oxygen molecules to form ozone. Most of the ozone production occurs in the upper atmosphere. The total mass of ozone produced per day over the globe is about 400 million metric tons! The global mass of ozone is relatively constant at about 3 billion metric tons, meaning the Sun produces about 12% of the ozone layer each day.
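A quick check of that arithmetic: 400 million tonnes produced per day against a 3-billion-tonne standing stock gives about 13% per day, slightly above the rounded ~12% quoted.

```python
# Check of the quoted figures: ~4e8 metric tons of ozone produced per day
# against a standing stock of ~3e9 metric tons.
produced_per_day = 4.0e8   # metric tons
total_ozone = 3.0e9        # metric tons
fraction = produced_per_day / total_ozone
print(round(fraction * 100, 1))  # ~13.3% of the ozone layer produced each day
```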

All the ozone in a given air parcel is destroyed many times over during the course of a single day when the parcel is in sunlight. Indeed, at an altitude of 30 km above the equator, the lifetime of an ozone molecule due only to UV photolysis is less than 1 hour. However, ozone is reformed in the parcel at almost exactly the same rate through the reaction between O and O2. Hence, ozone concentrations in the middle atmosphere change only very slowly over the long time scales (weeks to months) of production and loss.

However, while their sum is constant, these species are rapidly cycling back and forth, photochemically interconverting: all of the O3 is destroyed by UV photolysis every few minutes, leading to the formation of free O atoms, and all of the O atoms are immediately consumed in reactions with O2 to reform O3 in a fraction of a second.
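The coexistence of rapid cycling with slow net change can be sketched with a one-line production-loss model; the rates below are illustrative, not measured values. A species produced at rate P and destroyed with lifetime τ relaxes to its steady state Pτ on the timescale τ, so even though every molecule is destroyed many times a day, the concentration itself changes only as slowly as P does.

```python
# Production-loss relaxation: dC/dt = P - C/tau. With a short lifetime the
# concentration is pinned at the photochemical steady state P*tau, however
# fast the individual molecules cycle. All values illustrative.
tau = 1.0          # lifetime, hours (cf. ozone at 30 km: < 1 hour)
P = 100.0          # production rate, arbitrary units per hour
dt = 0.01
C = 0.0
for step in range(int(24 / dt)):   # integrate over one day
    C += (P - C / tau) * dt
print(round(C, 1))   # settles at P * tau = 100.0
```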

Thus we can conclude that there are NO mitigation benefits from taxation limits on agricultural emissions. Indeed the solutions suggested by the SCNZ will have the opposite effect of INCREASING atmospheric warming.

Saturday, June 23, 2007

Icebergs sink lower productivity claims of Southern Ocean

Due to the lack of rivers and little glacial runoff, nutrient runoff into the waters of the Southern Ocean surrounding Antarctica is limited, constraining the supply of the various biogeochemical trace elements necessary for productivity within Antarctic ecological systems.

According to a new study in this week’s journal Science these floating islands of ice – some as large as a dozen miles across – are having a major impact on the ecology of the ocean around them, serving as “hotspots” for ocean life, with thriving communities of seabirds above and a web of phytoplankton, krill, and fish below.

The icebergs hold trapped terrestrial material, which they release far out at sea as they melt. The researchers discovered that this process produces a “halo effect” with significantly increased phytoplankton, krill and seabirds out to a radius of more than two miles around the icebergs. They may also play a surprising role in global climate change.

“One important consequence of the increased biological productivity is that free-floating icebergs can serve as a route for carbon dioxide drawdown and sequestration of particulate carbon as it sinks into the deep sea,” said oceanographer Ken Smith of the Monterey Bay Aquarium Research Institute (MBARI), first author and principal investigator for the research.

“While the melting of Antarctic ice shelves is contributing to rising sea levels and other climate change dynamics in complex ways, this additional role of removing carbon from the atmosphere may have implications for global climate models that need to be further studied,” added Smith.

“Phytoplankton around the icebergs was enriched with large diatom cells, known for their role in productive systems such as upwelling areas of the west coast of the U.S. or ice-edge communities in polar oceans. As diatoms are the preferred food for krill, we expect the changes in phytoplankton community composition to favor grazing as a key biological process involved in carbon sequestration around free-floating icebergs,” said oceanographer Maria Vernet from Scripps Institution of Oceanography at UC San Diego, one of the members of the research team.

This is of course important, as the constraint on productivity is the limitation of the scarcest resource, i.e. Liebig's Law of the Minimum: at any given instant, any metabolic process is limited by only one factor at a time, the nutrient in shortest supply relative to demand.
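Liebig's law reduces to taking a minimum over supply-to-demand ratios; a sketch with invented supply and demand figures (iron standing in for the iceberg-delivered trace element):

```python
# Liebig's law of the minimum: growth is set by the scarcest resource
# relative to demand. All supply/demand numbers are made up for illustration.
supply = {"iron": 0.2, "nitrate": 8.0, "silicate": 5.0}   # available
demand = {"iron": 1.0, "nitrate": 10.0, "silicate": 4.0}  # needed for full growth

limiting = min(supply, key=lambda n: supply[n] / demand[n])
growth = min(supply[n] / demand[n] for n in supply)       # fraction of max growth
print(limiting, round(min(growth, 1.0), 2))
```

Relieving any non-limiting nutrient changes nothing; only the iron term moves the minimum, which is why iceberg-borne terrestrial material produces the observed halo of productivity.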

The Southern Hemisphere and temperature. What crisis?

I believe that the moment is near when by a procedure of active paranoiac thought, it will be possible to systematize confusion and contribute to the total discrediting of the world of reality.

Salvador Dali

It is now the post-industrial information age. Each day we experience data delivered in electronic, visual and print media to our senses and perceptions. The data transcends both virtuality and reality. The convergence of reality and virtuality in news and entertainment, with science and controversy, with chaos and catastrophe, and the transformation of the delivery of data along the various modes of information have resulted in uncertainty and confusion.

Indeed how can we be expected to identify reality, when there are difficulties distinguishing between reality and the unreal, when the unreal is being realized and the real shown as unreal? Each day we experience a growing crisis of unrealized proportions. As Umberto Eco observed, “crisis sells well”. The question such crises pose is whether attitudes have been undermined by the experience of modernity, or whether reality itself, something objective and firm, is an illusion. Is the paradigm now one of “there is no reality”? When the media, governments, and advertisers tell us that dreams are becoming realities, does this mean, conversely, that reality is becoming a dream?

Since the release of the IPCC “outlook” we have been inundated with catastrophic and cataclysmic predictions of heat and extreme temperatures, rapid ice melt, and sea level rises.

If the data interpretation were correct, we would expect to see more warm anomalies than usual!

Using the monthly temperature anomalies from the HADCRU dataset (variance from the 150-year running mean), I plotted the anomalies and show the linear trend for the Southern Hemisphere.
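The trend fit described above amounts to a least-squares line through the monthly series. The sketch below uses synthetic anomalies as a stand-in for the HADCRU data; with the real series the same two lines give the plotted trend.

```python
import numpy as np

# Synthetic monthly anomalies (random noise around a small assumed trend),
# standing in for the HADCRU series used in the post.
rng = np.random.default_rng(0)
months = np.arange(600)                                   # 50 years of monthly data
anoms = 0.0005 * months + rng.normal(0.0, 0.2, months.size)

# Least-squares linear trend through the series:
slope, intercept = np.polyfit(months, anoms, 1)
print(round(slope * 120, 3))   # trend expressed in degrees per decade
```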

As the SH has more cool anomalies than warm, there is no rational evidence of cataclysmic crisis observable, even using the HADCRU laundered datasets.

Friday, June 22, 2007

The Biosphere and the missing carbon sink.

In an interesting paper in Science published this week, we see the question arising of the missing carbon sink in the atmospheric interchange of CO2. We have discussed this here previously.

The global transport of carbon (partly in the form of CO2) among the large reservoirs is called the global carbon cycle. Carbon dioxide emitted into the atmosphere together with the uptake by the terrestrial sinks and oceans governs the carbon dioxide content observed by the global sampling networks. Currently 40-60% of the anthropogenically released carbon dioxide remains in the atmosphere. Our current knowledge is ambiguous as to whether the rest of the CO2 is being taken up by oceans or by terrestrial sinks (soil or vegetation) (Baldocchi et al., 1996). Indeed the missing carbon sink, around 20% of the global carbon cycle, is one of the unanswered questions for the IPCC.
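The budget argument can be made explicit with rough bookkeeping; the flux magnitudes below are illustrative assumptions, not measured values. Emissions minus the atmospheric increase leaves a residual to be split between ocean and land, and the "missing" sink is whatever the known sinks cannot account for.

```python
# Rough carbon bookkeeping, all numbers illustrative (GtC per year):
emissions = 8.0                      # anthropogenic release
airborne = 0.45 * emissions          # the 40-60% that stays in the atmosphere
ocean_sink = 2.2                     # assumed ocean uptake
land_sink = 1.0                      # assumed known terrestrial uptake

# The residual that known sinks cannot account for:
missing = emissions - airborne - ocean_sink - land_sink
print(round(missing, 2), round(missing / emissions, 2))
```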

The rhetoric of the sustainable carbon-neutral society modification experiment is indeed just that: propaganda from lobbyists and politicians who want us to lock up forests, or reforestation programmes which will have adverse daisyworld climatic effects in the future. Indeed increased forestry in non-tropical climates such as NZ has the effect of decreasing the albedo (reflection of shortwave solar radiation) and INCREASING local temperatures!

In the terrestrial biosphere, vegetation accounts for 20% of the carbon sink; the vadose zone (the soils and detritus materials) accounts for 80%.

Here any policies that impact on the biosphere-atmosphere exchange need to account quantitatively for the adverse effects prior to any policy change, i.e. for any equal and adverse response.

As Science reports

Forests in the United States and other northern mid- and upper-latitude regions are playing a smaller role in offsetting global warming than previously thought, according to a study appearing in Science this week. The study, which sheds light on the so-called missing carbon sink, concludes that intact tropical forests are removing an unexpectedly high proportion of carbon dioxide from the atmosphere, partially offsetting carbon entering the air through industrial emissions and deforestation.

The Science article, "Weak northern and strong tropical land carbon uptake from vertical profiles of atmospheric CO2," was written by an international team of scientists led by Britton Stephens of the National Center for Atmospheric Research (NCAR).

To study the global carbon cycle, Stephens and his colleagues analyzed air samples that had been collected by aircraft across the globe for decades but never before synthesized. The team found that some 40 percent of the carbon dioxide assumed to be absorbed by northern forests is instead taken up in the tropics.

Most of the computer models produced incorrect estimates because, in relying on ground-level measurements, they failed to accurately simulate the movement of carbon dioxide vertically in the atmosphere. The models tended to move too much carbon dioxide toward ground level in the summer, when growing trees and other plants take in the gas, and not enough carbon dioxide upward in the winter. As a result, scientists believed that there was relatively less carbon in the air above mid-latitude and upper-latitude forests, presumably because trees and other plants were absorbing high amounts.

This questions the quantification of the available carbon sinks for both Kyoto and carbon taxation mechanisms (credits and penalties). It also shows how the mainstream scientists of modelworld fail to include the microbial carbon sinks, which are responsible for 80% of carbon sequestration.

Saturday, June 16, 2007

Chaos, complexity, and the Cat map.

Chaos and Complexity theory studies nonlinear processes: Chaos explores how complexly interwoven patterns of behaviour can emerge out of relatively simply-to-describe nonlinear dynamics, while Complexity tries to understand how relatively simply-to-describe patterns can emerge out of complexly interwoven dynamics.

In mathematics, Arnold's cat map is a chaotic map from the torus into itself, named after Vladimir Arnold, who demonstrated its effects in the 1960s using an image of a cat. One of this map's features is that the image is apparently randomized by the transformation but returns to its original state after a number of steps.
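A minimal version of the map is easy to write down. The sketch below applies the discrete cat map to a hypothetical 5×5 "image" (just numbered pixels) and counts the steps until the original reappears; because the map is a permutation of the pixels, it must eventually come back.

```python
import numpy as np

# Arnold's cat map on an N x N grid: pixel (x, y) moves to
# ((x + y) mod N, (x + 2y) mod N). This is a permutation of the pixels,
# so iterating it must eventually restore the original image.
def cat_map(img):
    n = img.shape[0]
    out = np.empty_like(img)
    for x in range(n):
        for y in range(n):
            out[(x + y) % n, (x + 2 * y) % n] = img[x, y]
    return out

original = np.arange(25).reshape(5, 5)   # a 5x5 stand-in for the cat picture
img, period = cat_map(original), 1
while not np.array_equal(img, original):
    img, period = cat_map(img), period + 1
print(period)   # steps until the 5x5 "cat" reappears
```

The recurrence time depends sensitively and irregularly on the image size N, which is part of the map's charm.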

Chaos and Complexity emerge from non-linear mathematics. Theoretical and applied development of mathematical cybernetics and computer science made it possible for many mathematicians and physicists to step out of the framework of linearity, continuity and smoothness, and to approach problems belonging to the world of non-linearity, discontinuity and transformations.

With their pioneering works on local stability (and instability) of dynamical systems in the last decade of the 19th century, the Russian mathematicians Andrey Lyapunov and Sophia Kovalevskaya are viewed as the founders of the single most creative and prolific strand of thought in the analysis of dynamic discontinuities and non-linearities up to the present day, the Russian School. Significant successors to Lyapunov and Kovalevskaya include A. Andronov and L. Pontryagin (1937), who crucially advanced the theory of structural stability, A. Kolmogorov (1941), who developed the foundation of the mathematical theory of turbulence (chaotic fluid dynamics), and V. Arnold (1968), with his most complete classification of mathematical singularities (catastrophes).

In 1962 Kolmogorov and Obukhov mathematically demonstrated the possibility of intermittency in chaotic fluid dynamics and the emergence of patterns of order out of a turbulent flow.

The famous KAM theorem of Kolmogorov, Arnold and Moser (1954-63) threw light on the unresolved 3-body problem of Laplacean-Newtonian celestial mechanics - a problem first approached by the French mathematician Henri Poincaré (1890) who, facing its insurmountable computational difficulty, saw the possibility of the existence of a non-wandering (dynamically stable) solution of extreme complexity, and thus first predicted the existence of an attractor in chaotic dynamics. According to the KAM theorem, the trajectories studied in classical mechanics are neither completely regular nor completely irregular, but they depend very sensitively on the chosen initial states: tiny fluctuations can cause chaotic development.

The sensitive dependence on initial conditions as a true sign of chaotic dynamics was first studied mathematically by E. Lorenz (1963) in his three-equation model of atmospheric flow. A slight change in the initial conditions of the model generated significantly different behaviour. Lorenz labelled this phenomenon the "butterfly effect". He found also that dynamic trajectories described by the equations moved very quickly along a branched, S-shaped, two-dimensional attractor of a strange form.
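Lorenz's observation is easy to reproduce. The sketch below integrates his 1963 equations (standard parameters σ=10, ρ=28, β=8/3) from two initial states differing by one part in 10^8 and measures their separation; the step size and integration time are arbitrary choices.

```python
# The butterfly effect in the Lorenz (1963) system: two runs whose initial
# x-values differ by 1e-8 diverge by many orders of magnitude.
def lorenz(s):
    x, y, z = s
    return (10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 / 3.0 * z)

def rk4_step(s, dt):
    """One classical Runge-Kutta step."""
    def add(a, b, f):
        return tuple(ai + f * bi for ai, bi in zip(a, b))
    k1 = lorenz(s)
    k2 = lorenz(add(s, k1, dt / 2))
    k3 = lorenz(add(s, k2, dt / 2))
    k4 = lorenz(add(s, k3, dt))
    return tuple(s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

a, b = (1.0, 1.0, 1.0), (1.0 + 1e-8, 1.0, 1.0)
dt = 0.01
for _ in range(2500):            # integrate to t = 25
    a, b = rk4_step(a, dt), rk4_step(b, dt)

sep = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
print(sep)   # the 1e-8 initial difference has grown by many orders of magnitude
```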

Self-organizing behaviour can be exhibited by far-from-equilibrium chemical systems, as was shown by Ilya Prigogine, winner of the 1977 Nobel Prize in chemistry. According to the results of his studies, inorganic chemical systems can exist in highly non-equilibrium conditions impregnated with a potential for the emergence of self-organizing chemical structures. The more complex the aggregation of these structures, the stronger the tendency for macro-molecules to organize themselves.

There are a number of constraints.
1) Prediction and determinism are incompatible: we cannot predict long-term behaviour of complex systems, even if we know their precise mathematical description.
2) Reducing does not simplify: interaction is important and interaction means inseparability.
3) Simple linear causality does not apply to Chaos and Complexity.
4) Complex dynamics give birth to forces of self-organisation.

The reason why we cannot say much about a complex dynamic system is because of its enormous sensitivity: even an infinitely small change in the starting conditions of a complex process can result in drastically different future developments.

What does this all mean? Simplistically, if we cannot measure the initial values exactly, prediction is mathematically impossible.

Friday, June 15, 2007

Brown the new green in an ultraviolet world

As we have discussed here and here, the changes and effects of UV flux are the precursor mechanism for the oscillations in CO2 absorption and emission from the biosphere, and hence for changes in atmospheric levels.

The ability of biological species to adapt to adverse environments is one of the paradoxes of Ecological science.

It shows how the exclusion of some “players” from the “marketplace” allows smaller players to dominate the market through enhanced adaptability.

Changes to ozone levels and UV penetration are cyclical over the solar cycles: the 27-day rotation, the 11-year cycle, the Gleissberg cycle and longer orbital parameters.

Solar variability is observed on three main time scales: solar rotation (27-day), the solar cycle (11-year) and the Grand Minima time scale. The magnitude of the variability progressively increases from the short to long scales. Earth's climate responses are now found on all these scales. The most recognized are the responses to solar irradiance variations. These variations depend strongly on wavelength, ranging from 0.1% per solar cycle in total irradiance (mostly the infrared-optical range) to 10% in the UV and 100% per solar cycle in the X-ray range. The variations in the total irradiance produce a small global effect. More substantial is the effect of solar UV variability on large-scale climate patterns. These patterns are naturally excited in the Earth's atmosphere as deviations (anomalies) from its mean state.

How does the distribution of UVB, UVA, and photosynthetically active radiation vary on sensitive surfaces within the biosphere, in agricultural and forest canopies, over the growing season? Plants have widely varying sensitivity to solar UV radiation. This can result in shifts in the competitive advantage of one plant species over another, and consequently in the composition and health of both managed and natural ecosystems.

Daylength is the major environmental factor affecting the seasonal photosynthetic performance of Antarctic macroalgae. For example, the "season anticipation" strategy of large brown algae such as Ascoseira mirabilis and Desmarestia menziesii is based on the ability of their photosynthetic apparatus to make use of the available irradiance at increasing daylengths in late winter-spring. The seasonal development and allocation of biomass along the lamina of A. mirabilis are related to a differential physiological activity in the plant. Thus, intra-thallus differentiation in O2-based photosynthesis and carbon fixation represents a morpho-functional adaptation that optimizes conversion of radiant energy to primary productivity.

It is now known that various reproductive and life-history events in Antarctic macroalgae are seasonally determined: microscopic gametophytes and early stages of sporophytes in Desmarestia (Wiencke et al. 1991, 1995, 1996), Himantothallus (Wiencke & Clayton 1990) and P. antarcticus (Clayton & Wiencke 1990) grow under limited light conditions during winter, whereas growth of adult sporophytes is restricted to late winter-spring. Culture studies under simulated fluctuating Antarctic daylength demonstrated that macroalgae exhibit two different strategies to cope with the strong seasonality of the light regime in the Antarctic (Wiencke 1990a, 1990b). The so-called "season responders" are species with an opportunistic strategy, growing only under optimal light conditions mainly in summer, whereas the "season anticipators" grow and reproduce in winter and spring.

By virtue of their fine morphology, these microscopic stages have a high content of pigments per unit weight, a high photosynthetic efficiency and very low light requirements for photosynthesis, and they are better suited to dim light conditions than adult sporophytes. This strategy ensures the completion of the life-cycle under seasonally changing light conditions. Low light requirements for growth and photosynthesis have developed to cope with Antarctic seasonality and constitute adaptations that expand the depth zonation of macroalgae.

This suggests that the macroalgae have adapted to anticipate not only the early-spring photosynthetically active radiation, but also the high spring flux of UV due to ozone loss, as seen in the levels of melanin pigmentation.

Envisat has captured the first images of Sargassum from space, reports the ESA. The brown kelp, famous in nautical lore for entangling ships in its dense floating vegetation, has been detected from space for the first time thanks to an instrument aboard ESA’s environmental satellite.

The discovery was made using the MERIS maximum chlorophyll index (MCI) which provides an assessment of the amount of chlorophyll in vegetation to produce detailed images of chlorophyll per unit area. MERIS is uniquely suited for this because it provides images of above-atmosphere spectral radiance in 15 bands, including three bands at wavelengths of 665, 681 and 709 nanometres in order to measure the fluorescence emission from chlorophyll a.
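The index itself is essentially a line-height calculation: the radiance in the peak band minus a straight baseline through two flanking bands. A minimal sketch; the 681/709/753 nm triple is the usual MCI band choice (the 665/681/709 triple cited above is the fluorescence line height), and the radiance values are invented for illustration.

```python
# Baseline-subtracted line height, the core of indices like the MERIS MCI:
# height of the peak band above the straight line joining the flanking bands.
def line_height(w1, w2, w3, L1, L2, L3):
    """w = band wavelengths (nm), L = radiances; band 2 is the peak."""
    baseline = L1 + (L3 - L1) * (w2 - w1) / (w3 - w1)
    return L2 - baseline

# Invented radiances with a chlorophyll-driven peak near 709 nm:
h = line_height(681.0, 709.0, 753.0, 10.0, 14.0, 9.0)
print(round(h, 2))   # positive height flags dense surface vegetation
```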

Chlorophyll is the green photosynthetic compound in plants that captures energy from sunlight necessary for photosynthesis. The amount of chlorophyll present in vegetation plays an important role in determining how healthy it is. Accurately monitoring chlorophyll from space, therefore, provides a valuable tool for modelling primary productivity.

"The 709 band used by MERIS is not present on other ocean-colour sensors. It was essential to our detecting Sargassum," Gower said. "The MCI index has allowed us to find so many interesting things, including Sargassum and Antarctic super blooms. It really gives us a new and unique view of the Earth."

In the Arctic, where similar radiation regimes affect the aquatic biosphere, we see similar properties in the growth response of brown algae.

ABSTRACT. The effect of artificial ultraviolet (UV) and natural solar radiation on photosynthesis, respiration and growth was investigated in 14 red, green and brown macroalgal species on Spitsbergen (Norway) during summer 1998. In June, maximum mean solar radiation at sea level was 120 W m-2 of visible (370 to 695 nm) and 15 W m-2 of UV radiation (300 to 370 nm), and decreased gradually until the end of the summer. In spite of the low incident irradiance levels in comparison with other latitudes, UV radiation stress on growth of Arctic macroalgae was evident. Transplantation experiments of plants from deeper to shallow waters showed, for most algae, an inhibitory effect of both UVA and UVB on growth, except in the intertidal species Fucus distichus. The growth rate of selected macroalgae was directly correlated to the variations in natural solar radiation during the summer. Underwater experiments both in situ and using UV-transparent incubators revealed a linear relationship between the depth distribution and the growth rate of the algae. In almost all species the photosynthetic oxygen production decreased after 2 h incubation in the laboratory under 38 µmol m-2 s-1 photosynthetically active radiation (PAR, 400 to 700 nm) supplemented with 8 W m-2 UVA (320 to 400 nm) and 0.36 W m-2 UVB (280 to 320 nm), compared to PAR alone without UV. As in the growth experiments, the only exception was the brown alga F. distichus, in which photosynthesis was not affected by UV.
In general, no inhibitory UV effect on respiratory oxygen consumption was detected in any of the macroalgae studied under the artificial radiation regimes described above, with the exception of the brown alga Desmarestia aculeata and the green alga Monostroma arcticum, both showing a significant stimulation of respiration after 2 h of UV exposure. The ecological relevance of the seasonal variations in the solar radiation and the optical characteristics of the water column with respect to the vertical zonation of the macroalgae is discussed.

José Aguilera et al., Marine Ecology Progress Series 191: 109-119, 1999

In summary we can conclude that high-latitude species are adapted to changes in UV flux; that the response of species with elevated melanin pigmentation is better suited to the early levels of available PAR, where photosynthesis is present; and that in other species, due to a defensive response, the populations do not significantly decrease, although photosynthesis is absent as the organisms metabolize via dark respiration.

Friday, June 08, 2007

The architects of Ugly Buildings

In science you do not have to be polite, you only have to be right.
Winston Churchill

Mathematics is a part of physics. Physics is an experimental science, a part of natural science. Mathematics is the part of physics where experiments are cheap.

So writes the "Thomas Huxley" of mathematics.

In the middle of the twentieth century it was attempted to divide physics and mathematics. The consequences turned out to be catastrophic. Whole generations of mathematicians grew up without knowing half of their science and, of course, in total ignorance of any other sciences. They first began teaching their ugly scholastic pseudo-mathematics to their students, then to schoolchildren (forgetting Hardy's warning that ugly mathematics has no permanent place under the Sun).

Since scholastic mathematics that is cut off from physics is fit neither for teaching nor for application in any other science, the result was the universal hate towards mathematicians - both on the part of the poor schoolchildren (some of whom in the meantime became ministers) and of the users.

The ugly building, built by undereducated mathematicians who were exhausted by their inferiority complex and who were unable to make themselves familiar with physics, reminds one of the rigorous axiomatic theory of odd numbers. Obviously, it is possible to create such a theory and make pupils admire the perfection and internal consistency of the resulting structure (in which, for example, the sum of an odd number of terms and the product of any number of factors are defined). From this sectarian point of view, even numbers could either be declared a heresy or, with passage of time, be introduced into the theory supplemented with a few "ideal" objects (in order to comply with the needs of physics and the real world)…

…At this point a special technique has been developed in mathematics. This technique, when applied to the real world, is sometimes useful, but can sometimes also lead to self-deception. This technique is called modelling. When constructing a model, the following idealisation is made: certain facts which are only known with a certain degree of probability or with a certain degree of accuracy are considered to be "absolutely" correct and are accepted as "axioms".

The sense of this "absoluteness" lies precisely in the fact that we allow ourselves to use these "facts" according to the rules of formal logic, in the process declaring as "theorems" all that we can derive from them.

It is obvious that in any real-life activity it is impossible to wholly rely on such deductions. The reason is at least that the parameters of the studied phenomena are never known absolutely exactly and a small change in parameters (for example, the initial conditions of a process) can totally change the result. Say, for this reason a reliable long-term weather forecast is impossible and will remain impossible, no matter how much we develop computers and devices which record initial conditions.
In exactly the same way a small change in axioms (of which we cannot be completely sure) is capable, generally speaking, of leading to completely different conclusions than those that are obtained from theorems which have been deduced from the accepted axioms.
The longer and fancier is the chain of deductions ("proofs"), the less reliable is the final result.

Complex models are rarely useful (unless for those writing their dissertations).
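Arnold's point that a small change in the "axioms" (the initial conditions) can totally change the result is easy to demonstrate. The sketch below is mine, not Arnold's; it uses the logistic map, a standard toy example of sensitive dependence on initial conditions:

```python
# Sensitive dependence on initial conditions: the logistic map
# x -> r*x*(1-x) in its chaotic regime (r = 4).
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.300000)
b = logistic_orbit(0.300001)   # perturb the "axiom" by one part in a million

# The perturbation grows roughly exponentially; within a few dozen steps
# the two trajectories bear no resemblance to each other.
print(max(abs(x - y) for x, y in zip(a, b)))
```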

read on

Saturday, June 02, 2007

Russian Hydrogen and Nanotechnology investment fund will be world's #1 energy organisation

As signaled by the Russian Government last year at the G8 energy conference, with its $500 million investment in research and development for the innovative technologies of hydrogen fuel cells and nanotechnology, the creation of the world's largest energy investment fund moves the technology into overdrive.

Russian tycoon Mikhail Prokhorov has created a new private investment fund based on his assets. The undertaking, Onexim Group, will have over $17 billion in assets, based on Prokhorov's stakes in Norilsk Nickel (22 percent), Interros (50 percent) and Polus Zoloto (22 percent).

Once the Interros assets are split pari passu, Onexim Group will be further widened by Prokhorov's stakes in subsidiaries of the holding. The owners will divide 50/50 all assets of Interros, including Profmedia (100 percent), Otkrytye Investitsii (Open Investments; 58 percent), Silovye Mashiny (Power Machines; 30 percent), Norilsk Nickel (8 percent), Polus Zoloto (7 percent), Rosbank (69 percent), Soglasie Insurer (88 percent), RUSIA (26 percent) and Plug Power (17.5 percent).

Onexim Group will focus on innovation projects related to traditional and hydrogen power engineering, nano-technologies and mining. The fund is targeted at investment projects with budgets starting from $1 billion.

Prokhorov has become Onexim Group's president, and Dmitry Razumov has taken the general director's office. Razumov was Norilsk Nickel's deputy general director for takeover/merger strategy from 2000 to 2005.

“We stake on the projects where Russia has an objective competitive advantage. Our experience, analysis of the market and of development trends of Russia's and the world's economies convince us that innovation and high-tech projects are the most promising,” Prokhorov said when presenting Onexim Group.

Norilsk Nickel, the world's largest nickel producer, signaled this last year with investments in anode manufacturers and technology manufacturers worldwide.

One of the first Russian publications in the fuel cell field, dating back to 1941, was about hydrogen/oxygen fuel cells. In the early 1960s, investigations on fuel cells were under way in different institutes. The subjects studied included, among others, hydrogen production, storage, transportation and dispensing. The team of Russian scientists and engineers from the Kurchatov Institute of Hydrogen Energy and Plasma Technology, Kvant and other organisations worked on various development programmes ranging from fundamental research to the manufacturing of 130 kW and 280 kW fuel cell generators and power plants for specific naval and terrestrial applications.

Alkaline is the most studied fuel cell technology in Russia and was employed by the Soviet and later by the Russian Space Corporation in the BURAN spacecraft to power electrical systems similar to NASA’s Apollo and space shuttle programmes. The unit consisted of four 10 kW Photon fuel cells.

In 2003 Norilsk Nickel signed an agreement with the Russian Academy of Sciences (RAS) to finance a three-year fuel cell and hydrogen programme, spending US$120 million over this period. The first part of the programme is to develop PEM and solid oxide fuel cells as well as reformers. Another side of the project is to develop key infrastructure elements of the hydrogen economy, such as hydrogen production and storage devices as well as a delivery network. At the beginning of this project RAS Vice President Gennadiy Mesyats said that in the next four to five years the parties expect to have 'commercial' 5-25 kW fuel cells as well as new technologies for palladium catalysts. The latter is the main reason why the company is single-handedly investing a large sum of money into fuel cell research. As many may be aware, Norilsk Nickel is the largest palladium producer in the world, so provided this project is successful and the technology is licensed to other international companies, the company will directly benefit from securing palladium demand.

The Energia Rocket and Space Corporation is famous for its work on the Mir and Salyut orbital stations, as well as the Soyuz spacecraft. It designed and built the Russian fuel cell modules for the station. The company began PAFC design work in 1966 in preparation for a proposed Russian lunar landing. Since 1987 Energia has manufactured about 100 Photon fuel cell modules. In test programmes and during actual missions, these cells have accumulated a total of 80,000 hours of operational experience. In the early 1990s, it started research on fuel cells for submarine applications. In this field the company is working with the Central Construction Bureau of Navy Equipment Rubin, and international partnerships are being considered, especially with China and India. Energia has not conducted significant research on terrestrially based fuel cell systems, although it has signed an agreement with the US-based Power Technologies Corporation to commercialize fuel cells for stationary and mobile applications.

The Keldysh Research Centre is the leading enterprise of the Russian Aerospace Agency in research and development on rocket engines and space-based power generation. The centre undertakes scientific research into rocket engineering and space propulsion; space rocket technologies, fuels and materials; and the development of advanced ignition systems and other technologies. In the fuel cell field, the centre is researching technology to develop fuel cells of up to 50 kW at a cost of less than US$500/kW. Some research was done on base parameters of membrane electrode assemblies (MEAs) in comparison to existing Nafion and MF-4SK. There is also some work being done on reformer development.

The selection of Doctor Thorsteinn Sigfusson (Iceland) as one of this year's Global Energy Prize laureates reflects both the technology and Iceland's objective of becoming the world's first hydrogen economy.

Geothermal energy originates from geonuclear activity in the Earth’s core. In Iceland, this energy manifests itself on the Earth’s surface in the form of geysers. All told, geothermal energy provides the inhabitants of Iceland with more than 90% of their heating needs and accounts for more than 20% of the country’s electricity generation, greatly reducing pollution and dependence on imported fossil fuels. A paper presented to the German International Energy Conference in Essen last year argues that communities around the world can, like Iceland, benefit from linking their geothermal resources to the development of a hydrogen economy. According to the authors, Bragi Arnason and Thorsteinn Sigfusson from the University of Iceland, geothermal gas-vent sampling in Iceland shows that hydrogen gas makes it to the Earth’s surface in technically recoverable concentrations. The origin of this direct geothermal hydrogen is thought to be contact between magma (molten rock) and water in the Earth’s crust. In the Krafla geothermal field of northern Iceland, for example, some experimental vents release around 50 tonnes of hydrogen gas annually. If hydrogen were to be extracted from hydrogen sulphide, which is also released from such vents, the amount of recoverable hydrogen could double. In fact, Sigfusson has devised a method to convert this emission directly into hydrogen, with current laboratory testing set to be extended to pilot-plant scale soon. In addition to direct hydrogen extraction, geothermal energy can be used to power the electrolysis of water to provide hydrogen. This can be done using alkaline electrolysis, polymer-electrolyte-membrane electrolysis or high-temperature steam electrolysis.
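To put the Krafla figure in perspective, here is a back-of-envelope sketch (my arithmetic, not from the article) of the continuous chemical power carried by ~50 tonnes of hydrogen per year, assuming hydrogen's lower heating value of about 120 MJ/kg:

```python
# Rough energy equivalent of ~50 tonnes of vented hydrogen per year.
LHV_H2 = 120e6                     # lower heating value of H2, J per kg (assumed)
mass_per_year = 50_000.0           # kg (50 tonnes, from the article)
seconds_per_year = 365.25 * 24 * 3600

energy_per_year = mass_per_year * LHV_H2          # J of chemical energy per year
avg_power_kw = energy_per_year / seconds_per_year / 1000.0
print(round(avg_power_kw))         # ~190 kW of continuous chemical energy
```

Modest on a grid scale, which is why the authors pair direct extraction with geothermally powered electrolysis.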

With a potential resource in NZ of around 3,610 MW for hydro and around the same for geothermal, will we see these resources optimized?
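As a rough sketch of what that hydro potential could mean, assuming a hypothetical 50% capacity factor (my assumption; the text gives only installed-capacity figures):

```python
# Annual energy from ~3,610 MW of hydro potential at an assumed capacity factor.
capacity_mw = 3610.0
capacity_factor = 0.5              # assumption, not from the text
hours_per_year = 365.25 * 24       # 8766 h

gwh_per_year = capacity_mw * capacity_factor * hours_per_year / 1000.0
print(round(gwh_per_year))         # ~15,800 GWh per year
```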

The Diophantine approximations theory of Climate science

The Diophantine approximations theory appears in these problems because of the crucial influences of the resonances between the frequencies of the unperturbed problems on the perturbations evolution. One of the first observed manifestations of these resonances is the approximated commensurability of the years of Saturn and of Jupiter, whose periods ratio is approximately 5:2 (Jupiter’s angular motion is about 299” a day, that of Saturn - about 120”).
The Poincaré averaging in the case of such a resonance leads to the large “secular perturbation”, whose period is of order 10^3 years but which is still periodic (like the pendulum oscillation) near the unperturbed motion. It leads to the evolution of the orbit in one direction during several centuries, which, had it continued forever, would have destroyed the Solar system.

V Arnold
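Arnold's figures can be checked with a few lines of arithmetic (the sketch is mine, not Arnold's). The daily angular motions quoted above give a motion ratio close to 5:2, and the slow resonant combination 5n_Saturn - 2n_Jupiter circulates once in roughly 1,800 years, i.e. the "secular perturbation" of order 10^3 years:

```python
# Check the 5:2 Jupiter-Saturn resonance and the ~10^3-year secular period
# (the Great Inequality), using the daily angular motions from the text.
n_jupiter = 299.0   # mean daily motion, arcseconds per day
n_saturn = 120.0

ratio = n_jupiter / n_saturn
print(round(ratio, 2))             # ~2.49, close to 5/2

# The resonant argument drifts at 5*n_S - 2*n_J arcseconds per day;
# its period is a full circle (360 * 3600 arcsec) divided by that rate.
drift = 5 * n_saturn - 2 * n_jupiter        # 2 arcsec/day
period_years = (360 * 3600 / drift) / 365.25
print(round(period_years))         # ~1774 years, order 10^3 as Arnold says
```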

That there is a causal connection between the observed variations in the forces of the Sun, the terrestrial magnetic field, and the meteorological elements has been the conclusion of every research into this subject for the past 50 years. The elucidation of exactly what the connection is, and the scientific proof of it, is to be classed among the most difficult problems presented in terrestrial physics. The evidence adduced in favor of this conclusion is on the whole of a cumulative kind, since the direct sequence of cause and effect is so far masked in the complex interaction of the many delicate forces in operation as to render its immediate measurement quite impossible in the present state of science.

F.H. Bigelow
US Dept. Agriculture Weather Bureau
Bulletin No.21, 1898

The complexities of meteorology and of its generalized pupil, “climate science”, are often misrepresented as being able to understand and measure (model) the changes of differentials in thermodynamic equilibrium from an initial state to a predicted state.

The claim that scientists can predict the changes that will initiate catastrophic climatic events is far from reality, as even the simplest differentials, i.e. changes in convective thermodynamics over a short period, are beyond the predictive capabilities of existing systems.

In thermodynamics, the Gibbs free energy (IUPAC recommended name: Gibbs energy or Gibbs function) is a thermodynamic potential which measures the "useful" or process-initiating work obtainable from an isothermal, isobaric thermodynamic system. Technically, the Gibbs free energy is the maximum amount of non-expansion work which can be extracted from a closed system, and this maximum can be attained only in a completely reversible process. When a system changes from a well-defined initial state to a well-defined final state, the Gibbs free energy ΔG equals the work exchanged by the system with its surroundings, less the work of the pressure forces, during a reversible transformation of the system from the same initial state to the same final state.

Gibbs energy is also the chemical potential that is minimised when a system reaches equilibrium at constant pressure. As such, it is a convenient criterion of spontaneity for isobaric processes.
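A minimal numerical sketch of this spontaneity criterion, using the familiar relation dG = dH - T*dS and approximate textbook values for melting ice (the values are standard figures, not from the text):

```python
# Gibbs energy as a spontaneity criterion at constant T and P: dG = dH - T*dS.
# Example: melting of ice, with approximate values dH = 6010 J/mol and
# dS = 22.0 J/(mol K).
def delta_g(dh, ds, t_kelvin):
    """Gibbs energy change (J/mol) at temperature t_kelvin."""
    return dh - t_kelvin * ds

DH_FUS, DS_FUS = 6010.0, 22.0

print(delta_g(DH_FUS, DS_FUS, 298.0) < 0)   # True: melting is spontaneous at 25 C
print(delta_g(DH_FUS, DS_FUS, 263.0) < 0)   # False: not spontaneous at -10 C
print(round(DH_FUS / DS_FUS, 1))            # ~273.2 K: dG = 0, the melting point
```

The temperature at which dG changes sign (dH/dS) is exactly the equilibrium point between the two phases.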

The Gibbs free energy, originally called available energy, was developed in the 1870s by the American mathematical physicist Willard Gibbs. In 1873, Gibbs defined what he called the “available energy” of a body as such:
“The greatest amount of mechanical work which can be obtained from a given quantity of a certain substance in a given initial state, without increasing its total volume or allowing heat to pass to or from external bodies, except such as at the close of the processes are left in their initial condition.”

The initial state of the body, according to Gibbs, is supposed to be such that "the body can be made to pass from it to states of dissipated energy by reversible processes."

To understand the complexities of the simple change from an initial state to a subsequent state one can view the variables (parameters) that need to be assigned to the differential equations.

First level

Second level

Third level.

Simple is it not.

Friday, June 01, 2007

The hydrocarbon cycle and UV radiation: harbingers of life.

The classic experiment demonstrating the mechanisms by which inorganic elements could combine to form the precursors of organic chemicals was the 1953 experiment by Stanley Miller. He undertook experiments designed to find out how lightning, reproduced by repeated electric discharges, might have affected the primitive Earth atmosphere. He discharged an electric spark into a mixture thought to resemble the primordial composition of the atmosphere.

As we saw here, photodissociation on Titan is driven by high-energy UV radiation and cosmic radiation.

As we predicted, no lightning discharges were detected in the quiescent Titan atmosphere. Therefore, Titan's atmospheric chemistry is driven mainly by solar UV irradiation and not by electrical discharges. The mixing ratios of the major gas-phase species produced by UV photolysis of acetylene, as found experimentally (methylacetylene, diacetylene, divinyl and benzene), were observed by the Cassini spacecraft in Titan's upper atmosphere, with agreement to within better than an order of magnitude.

Being released this week are the first results from a 10-year study and the comparisons with measurements from Cassini-Huygens.

Planetary scientists are a step closer to understanding the composition of the dust in Titan’s atmosphere. A decade-long programme of laboratory studies, aiming to reproduce Titan’s unique dust, or ‘aerosol’ population in specially constructed reactors, has proved invaluable.

Aerosols are small, solid particles that float in the air. On Earth, they are often the result of pollutants in the atmosphere. On Titan, they occur naturally and are abundant in the atmosphere, masking its surface.

Tholins are complex nitrogen-rich substances that form in the laboratory when ultraviolet radiation or electrons react with simpler molecules such as methane and ethane in a surrounding atmosphere of nitrogen. On Titan, the methane and nitrogen-rich atmosphere makes their formation easy and they drift to the surface where they continue to react with other atoms and molecules.

Faced with creating such alien molecules, the French team designed a special reaction chamber to simulate Titan’s atmosphere and produce the tholins for study. “We can generate over 200 chemical species,” says Patrice Coll, a team member at Laboratoire Interuniversitaire des Systèmes Atmosphériques (LISA), Paris. “We do not yet know the detailed pathways that build the chemicals, but we believe they are very similar to those on Titan.”

The aerosols govern what you can see on Titan. They create Titan’s hazy conditions, revealed by Huygens, and give the moon its dull orange glow. If you could stand on the surface of Titan and magically tune your eyes to infrared light, the haze and the clouds would seem to disappear and Saturn would loom large in the night sky. This is because the aerosols are largely invisible at infrared wavelengths. Change your eyes to ultraviolet, however, and you would be plunged into darkness because, at these wavelengths, the tholins behave like a thick fog that absorbs all ultraviolet radiation falling on it.

The deposits form when solar ultraviolet radiation and charged particles react at high altitudes with Titan’s abundant methane to produce carbon- and hydrogen-bearing (hydrocarbon) molecules like ethane and acetylene, and more complex nitrogen-bearing molecules generally called tholins. These products drift down to the surface as aerosols much in the same way smog particles on Earth form and coat surfaces. On Titan however these deposits may accumulate to thicknesses of hundreds of metres deep.
