Apart from the obvious office use of computers for anything from preparing
wage statements to scheduling orders so that they go to the works in the most
economic way, computers are now being used ‘on-line’, to control actual
production processes. A simple example comes from the cutting of billets to
suit customers’ orders. Billets of special and very expensive alloy steel have to
be cut into many different lengths to suit individual customers. The billets do
not come from the mill in exact lengths, and the practice was always for the
operator who controlled the billet saw to be given the measured length of each
billet as it came to him. He then had to decide the best way to cut it up,
according to all the orders he had to satisfy, and set the saw accordingly, so
that as little as possible of the expensive material was wasted. However skilled
the operator, human error was unavoidable.
With computer control the billets are measured automatically as they are
rolled and this figure is fed into the computer with details of the customers’
orders. The computer matches these two sets of information and produces, for each
billet, the cutting scheme that will use it most economically. There are numerous other examples
of computer control in steelmaking and processing, and the trend is increasing.
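In modern terms the operator's task is a small instance of the cutting-stock problem. The text does not describe the actual works software, so the following is only a minimal illustrative sketch, in Python, with all names and figures hypothetical: it selects, by dynamic programming over the outstanding order lengths, the combination of cuts that leaves the least scrap from one measured billet.

```python
# Hypothetical sketch (not the actual works software): choose cuts for one
# measured billet so that as little as possible of the expensive alloy steel
# is wasted. Treated as a small cutting-stock problem, solved by 0/1 dynamic
# programming over achievable totals of order lengths (all lengths in mm).

def best_cuts(billet_length, order_lengths):
    """Return (cuts, scrap): the order lengths to cut from this billet
    and the length of material left over."""
    # best[t] holds a list of cuts summing to exactly t mm, or None.
    best = [None] * (billet_length + 1)
    best[0] = []
    for length in order_lengths:
        # Iterate downwards so each ordered piece is used at most once.
        for total in range(billet_length, length - 1, -1):
            if best[total] is None and best[total - length] is not None:
                best[total] = best[total - length] + [length]
    # The largest achievable total leaves the least scrap.
    for total in range(billet_length, -1, -1):
        if best[total] is not None:
            return best[total], billet_length - total

if __name__ == "__main__":
    cuts, scrap = best_cuts(11400, [3000, 3000, 2500, 4100, 1800])
    print(cuts, scrap)  # [3000, 2500, 4100, 1800] 0 -- nothing wasted
```

A real installation would have to balance many billets against the whole order book at once, but the principle is the same: the machine compares feasible cutting schemes exhaustively instead of relying on the operator's judgement.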
In special and alloy steelmaking there is also a great deal of mechanization
and process control is highly instrumented, but the scale of operations is
smaller and the range of products wider. Today all alloy and special steels are
made in electric furnaces. The old crucible process is extinct, though there is a
crucible melting shop preserved in working order at Abbeydale Museum,
Sheffield. Electric furnaces can be of two types, arc or induction. In the arc
furnace heat is generated by means of an electric arc and metal is melted by
this heat in a refractory-lined vessel of drum shape, which can be tilted
mechanically to tap the finished steel (see Figure 2.6). Arc furnaces are now of
many sizes, holding from a few tonnes to 150 tonnes or more. The induction
furnace was invented in Italy in 1877 but the time was not ripe for it to
develop and the first one in Britain was installed in 1927. In this furnace there
is no arc; an electric current induces a secondary one inside the furnace itself
and generates sufficient heat to melt steel. Induction furnaces are generally
used for the very special and expensive alloy steels and may range in capacity
from a few kilogrammes to 5 tonnes or more.
Neither type of electric furnace uses liquid, solid or gaseous fuel, so the steels
made in them cannot be contaminated from these sources, an important
consideration in the higher-grade steels. Any contaminant—even in a very small
percentage—might be disastrous in a gas-turbine disc, for example, so a very
‘clean’ furnace is highly desirable.
Unfortunately, some gases also act as contaminants in steel, and since these
gases are present in air it is impossible to avoid their getting into the steel.
They must then be removed and there are several ways of doing so.
One is vacuum melting. Steel is made as carefully as possible in an ordinary
electric furnace and then remelted in an induction furnace which is contained
in a sealed chamber under a vacuum. As the steel melts, the gases are given off
and drawn away by the vacuum pumps. Vacuum furnaces are complicated
pieces of machinery, with equipment for taking samples of molten steel,
making corrective additions to the melt, and casting the steel into an ingot, all
without disturbing the vacuum. They are naturally expensive to buy and they
have small outputs, but they are justified when very high quality is essential.
Figure 2.6: Model of an electric-arc furnace.
Among other methods of cleaning and purifying high-grade steels is
electroslag refining, which is of growing importance. In this process a long bar
of steel is made and the end of it is gradually melted off under a pool of
molten slag. The slag is specially prepared so that the impurities pass into it; in
effect they are ‘washed’ out of the steel.
Iron and steelmaking today is a pattern of highly sophisticated and very
expensive plant and, for the bulk producers, of very large outputs coming from
a few works in coastal areas. The pattern is world-wide; almost every major
industrial country is concentrating its bulk steelworks in this way.
There are, however, two exceptions to this trend towards ever-increasing
size. In the first place the alloy and special steelworks have not followed the
pattern and are not likely to. A new giant steelworks could cost £1000 million
or more to build today, a cost which could only be justified by the very large
outputs achieved for tonnage steels. Alloy steels are not wanted in such great
quantities, so the alloy steelworks, though bigger and costing considerably
more than they used to, will never reach giant size.
In the second place, there is a new type of steelworks, not at all large by
modern standards, which is proving very successful. This is the so-called
mini-mill, made economically possible by the electric furnace and continuous
casting. There are several in Europe and quite a number in the United States
and Japan, where the giant works is often thought to be supreme. One has
been built in Britain and others are under construction or planned.
A mini-mill is a steelworks based on scrap steel, collected over a fairly small
area locally, melted in an electric-arc furnace, cast in a continuous casting
machine and then rolled in a mechanized mill. Its products will be few in
number; sometimes there will be only one finished product. Concrete
reinforcing bars are a typical mini-mill product. The output can range from as
little as 50,000 tonnes a year up to 400,000 or more.
The size of a mini-mill is determined by that of the local market and that of
the area from which the raw material, scrap, is collected. In the USA, with its
great land mass, a market can be a long way—and in terms of transport an
expensive way—from the traditional steel-producing areas, so a locally-based
works can do well. But even in Japan and Britain, mini-mills can prosper if
they get their economics right. The first British mini-mill, at Sheerness in Kent,
was designed for expansion from an annual capacity of 180,000 tonnes to
400,000 tonnes.
Discounting the possibility that steel may in the foreseeable future be
superseded by some other material, it is difficult to imagine any major changes
in the metal itself. Specifications will change, new alloys will be developed for
new, unforeseeable applications, but steels will still be recognizable as such.
It is in the methods of manufacture that the greatest changes are likely. The
blast furnace could be the first to face this effect. At present it is the most
economic means for converting iron ore to metal. But coke is getting more
expensive and scarcer: not every type of coal is suitable for making coke, and
the world's reserves of suitable coking coals are dwindling. There are alternative
methods of reducing iron ore. Some modern plants using oil or natural gas as
fuel can produce relatively pure iron pellets which are suitable for melting
down to make steel. It is also possible to smelt iron ore electrically. In a few parts of the
world, where coke is too expensive, these alternative processes are already in use: in Mexico,
for example, where there is iron ore but coke would have to come from the USA,
and the transport charges alone would make it very expensive.
Perhaps the ultimate dream of the steelmaker is the fully-continuous
production of steel. Raw materials would come in at one end of the works,
flow through the various processes in a continuous line and come out at the
other end as finished products. Parts of the production line are already
continuous, but there are major technical problems to be solved before truly
continuous steelmaking is practicable.
But there are quite a lot of people in many countries trying to fill the gaps
between theory and practice, and nobody can predict what might happen, or
when. The one certainty is that we have not yet heard the last of steel or of its
basis, the element iron.
3
THE CHEMICAL AND ALLIED INDUSTRIES
LANCE DAY
INTRODUCTION
The chemical industry is that by which various kinds of matter are transformed
into other kinds that are needed in manufacturing or in everyday life. Its history
falls into two periods. The first, the pre-scientific stage, stretches back into the
distant past, to man’s earliest attempts to deal with materials. The second,
scientific, period is of quite recent origin, in the late eighteenth century, when
chemical science began to be usefully applied to chemical technology.
In the earliest period up to the end of the neolithic age, that is around 3000
BC, practice of the chemical arts was restricted almost entirely to the making
of fire, alcoholic fermentation and the baking of pottery. This was succeeded in
various parts of the world by what are termed the ancient civilizations, such as
those of Egypt, Mesopotamia, the Indus Valley and, somewhat later, China.
Here city life began, communication, above all writing, developed to make it
possible to keep records and disseminate knowledge, and new techniques were
developed. Some of these, such as medicine, surveying and astronomical
observation, were carried out systematically. At the same time the range of
materials available and the processes by which they were treated widened, with
the effect of improving the way of life for the citizens. Most important of these
materials were the metals, gold and silver, copper and tin, alloyed in bronze,
and later iron (see Chapters 1 and 2).
Throughout this period, and in the succeeding ages of Greece and Rome, the
rise of Islam and on into Western Europe, the extraction and preparation of useful
substances was essentially a craft, carried on, like any other, by skilled artisans
who learned their trade through apprenticeship and experience and not from a
corpus of literature; there was none, and if there had been, very likely they were
unable to read. Such accounts as there are of the ancient chemical arts were drawn
up by those not engaged in the craft. For example, the encyclopaedic Historia
naturalis of the Roman official Pliny has embedded in it many descriptions of
chemical processes, some accurate, some less so, for they are based on secondhand
reports rather than original observation. Many recipe books survive to give us
some idea of what went on, like the chemical tablets of seventh-century BC
Assyria, although these are no more than lists of ingredients.
With the coming of printing and a mercantile and practical class of reader, a
demand developed for clear accounts of the making of various substances. These
begin to appear early in the sixteenth century, some being fine examples of the
art of book-making. Hieronymus Braunschweig’s books on distilling were
printed early in the 1500s and were the first to include illustrations of chemical
apparatus. Neri’s sober account of glass-making followed and there were the
metallurgical treatises of Vannoccio Biringuccio (1540), Agricola (Georg Bauer)
(1556) and Lazarus Ercker (1574). This literature is severely practical and shows
little trace either of magical or superstitious elements on the one hand or, on the
other, of the current philosophical ideas about the nature of matter. It was the
Greek philosophers of the sixth century BC onwards who began to seek an
underlying unity in the variety of materials in nature and a few fundamental
principles or elements from which this variety could be derived. The explanation
that gained widest acceptance was the four-element theory propounded by
Aristotle and his followers from the fourth century BC, which held that all
materials consisted of varying proportions of the elements fire, air, water and
earth. This theory was not seriously criticized until the seventeenth century and
fell into disuse during the following century, surviving today only in such
phrases as ‘the fury of the elements’. This and certain other ideas about the
nature of matter and the ways it could undergo change were applied by the
alchemists in the course of their work attempting to make gold.
The artisan did not think in philosophical terms because he had not been
educated in the schools, and if he had been, it would not have been the slightest
help in his craft of making useful materials. The lack of a genuine theoretical
understanding was a great handicap, particularly in identifying and evaluating
materials. The glass-maker, for example, did not know that silica, sodium
carbonate and lime as such were needed to make glass; the first glass-maker
discovered by accident, and his followers knew from experience, that sand
melted with the ashes of certain plants would yield glass. They knew that the
whiter the sand, the more colourless the glass. Likewise, it had been found from
experience that the ash from some maritime plants produced the best glass,
though without any understanding that these ashes contained soda, potash and lime, or that
these were necessary for glass-making. Materials were recognized by their look
and feel, learned from those who already knew. Knowledge of materials and
processes tended to be kept within rather closed communities and not widely
disseminated. Communications were difficult enough without the deliberate
secrecy that was sometimes practised, as when the earliest Venetian glassmakers
sought, albeit vainly, to prevent a knowledge of their art from spreading.
Lacking a means of identifying substances correctly, the early chemists
could be so confused about them as sometimes to use the same name for
different substances, such as ‘nitrum’, which could mean both sodium
carbonate and potassium nitrate. On the other hand, different names could
unwittingly be used for the same substance: ‘vitriolated tartar’ and ‘vitriolated
nitre’ were both used at times for more or less impure potassium sulphate,
apparently with no awareness that the substances so designated were
essentially the same.
Inability to identify materials made it impossible to evaluate them, that is, to
determine how much of them was present. Finding out what and how much is
the object of chemical analysis and not surprisingly it arose in connection with
those materials that could be recognized, namely the metals, particularly gold
and silver. The printed books on the assay of metals which began to appear
early in the sixteenth century are evidence of a practical tradition in the
quantitative evaluation of gold and silver. But for other materials it was hit or
miss. The ironworker, for example, had no way of knowing whether he had
extracted all the iron from a charge of ore. Very likely more than half would
have been left in the refuse or slag, so that later workers often found it
worthwhile to rework the old slag heaps. As for determining the quality of the product, if the
customer was satisfied, that was enough; there was no other criterion.
Not understanding what was going on, the artisan found it difficult to
regulate his processes and distinguish significant from irrelevant factors.
Adding a new ingredient one day, or giving the mixture a good stir, might
appear to have improved the result and the new procedure would have passed
into time-honoured practice until somebody accidentally omitted it without
adverse effect. Nobody would have known or even asked why the new
procedure seemed to work. There was thus no understanding of the effects of
temperature, pressure and all the other conditions that are now known to
influence chemical changes. Temperature was in any case difficult to control.
The mainly charcoal-fired furnaces were awkward to regulate and things could
easily get out of hand, as illustrated by the explosive mishaps that befell the
alchemists of Chaucer and Ben Jonson. The poor quality of many of the
reaction vessels was also a hindrance and led to much waste. Considering all
the handicaps, it is indeed remarkable that such a range of useful materials was
produced with an acceptably high quality. It was all achieved by craftsmen
relying on skill of eye and hand gained through years of practice and inherited
from generations of work in the industries concerned.
All this was to change dramatically within a relatively short space of time. The
idea of increasing natural knowledge gained from observation and applying it to
industry or the useful arts developed during the seventeenth century. The Fellows
of the Royal Society, from its royal charter of 1662, took a considerable interest in
industry and made some useful suggestions for improvements in chemical
processes. One of the founder members, the Hon. Robert Boyle, sharply criticized
the prevailing ideas in chemistry and urged that it shed its disreputable alchemical
connection and apply the concepts of the new mechanical philosophy. This
criticism lacked precision, however, and a further century was to elapse before a
chemical theory was established which actually corresponded with reality, at the
hands of Antoine-Laurent Lavoisier from around 1780. The processes involving
oxygen, such as combustion, were correctly explained, the nature of acids, bases
and salts was put on a sounder footing, and in particular a clear definition of a
chemical element was not only stated but usefully applied to draw up the first list
of elements in the modern sense. A beginning was made in chemical analysis and
after 1800 great improvements were made in quantitative analysis. Soon after
1800 rules for the way in which elements combined to form compounds were first
enunciated, and with the atomic theory of John Dalton, chemists could visualize
and explain chemical reactions in terms of the ultimate particles forming the basis
of all matter.
Early beneficiaries of the chemical revolution were the manufacturers of
cheap sulphuric acid, caustic soda, and chlorine for the textile industry.
Developments in the industry gathered pace, informed by discoveries on the
theoretical side. The nineteenth century was the era of pure and applied
chemistry. The pure chemist was concerned to advance chemical knowledge for
its own sake, irrespective of its possible practical use. The applied chemist,
meanwhile, was employed to improve the processes for producing commercially
useful substances, to seek new exploitable materials and, above all, to carry out chemical
analysis to monitor processes and the quality of products. Too often the two
kinds of chemist worked in isolation from each other, the former being blissfully
unaware of the needs of industry and the latter prevented from carrying out
research that did not show an obvious profit. This division of role has, however,
become increasingly blurred with the growth from the beginning of this century
of the great chemical firms: indeed, the terms ‘pure’ and ‘applied’ chemistry can
be said to belong to a bygone age. Improved contacts between the universities
and industry make the former’s research departments more aware of problems in
industry, while much research in industry is in areas wider than those for which
there is an immediate cash return. In addition, in most industrialized countries
the state sponsors research and itself carries it out, in government laboratories,
and without rigidly restricting its attention to problems of public concern. It is a
melancholy fact that in Britain the state, industry and the universities never
combined to deal with common needs so effectively as in the two world wars. The
production of the first atomic bomb is the prime example of such co-operation
on an international scale.
By and large the chemical industry in the developed countries has been in
the hands of private commercial firms and, however altruistic some of their
activities may be at times, the ultimate reason for a process or product to be
developed is that it will make a profit. It is worth noting that this profit is the
source of funds for research by the state and the universities, whether by direct
sponsorship or indirectly through taxes. Because of the successful and
systematic application of theoretical chemistry, first in inorganic then in
organic chemistry and physical chemistry, especially the mechanism of
reactions, the range of substances which the chemical industry has produced
for man’s use, with ever-improving quality, has been truly remarkable. The
comparison with several millennia of near stagnation makes the progress of the
last two centuries all the more striking. In 1800 the chemical industry was
important, but on a small scale, its products limited to metals, acids, alkalis,
pigments, tan-stuffs, medicines and a few other chemicals, some made on a
scale not much greater than in the laboratory. Now the scale is vast, yet the
industrial chemist exercises a precise control over the processes to yield an
exactly predictable result. The source of this progress has been research.
Sometimes progress has come by directing research to solving a particular
problem, such as making a substance with certain required properties. But the
more fruitful source has been to apply discoveries not made with a particular
practical end in view. An example of the first is presented by Alfred Nobel and
his intention to make nitro-glycerine a safe explosive. In the course of this he
invented dynamite and blasting gelatine (see p. 223). But the more remarkable
discoveries have been those that were not intended. Thus Perkin, while trying
to synthesize quinine, lighted on something quite unexpected, the first aniline
dye, mauve—which led to a whole new industry (see p. 201). An example of
the deliberate application of the results of pure research can be seen in the
hydrogenation of oils to make fats like margarine, stemming from the study of
the catalytic hydrogenation of unsaturated compounds in the presence of a
metallic catalyst by Sabatier and his colleagues around 1900. Until then, the
production of margarine, invented in 1869 by the French chemist Hippolyte
Mège Mouriès, had been limited by the availability of raw materials, but the
hydrogenation process enabled almost unlimited quantities of oils such as
cottonseed oil to be converted into solid fats.
This chapter follows the history of the making of the more important
substances or groups of substances that are a help to man, in one way or
another, in his everyday life.
POTTERY, CERAMICS, GLASS
Pottery and ceramics
The hand-forming of plastic clay and changing it by heating into a hard body
impermeable to water is a technique that goes back to the dawn of civilization,
that is, to before 6000 BC. Indeed, primitive man, whether in prehistoric times
or the present, fashions clay by hollowing out a ball and leaving it to dry in the
sun or heating it on an open fire. Such simple means can hardly be classed as
even primitive industrial chemistry, but with the rise of the ancient civilizations
and the settled urban life that made tolerable the fragile nature of pottery, a
number of materials began to be used for a variety of decorative effects. Potters
learned, too, to control the temperature of their kilns to produce different
colours. In modern parlance they employed reducing and oxidizing conditions to
achieve various effects, without of course understanding the reason for this.
Clays are, chemically speaking, hydrated aluminium silicates with other
substances such as alkalis, alkaline earths and iron oxide. It is this last that
gives the commonest clay its characteristic red colour. The clays commonly
found in nature are plastic when mixed with water and can be formed into a
variety of shapes. When left to dry until the water content is 8–15 per cent, the
clay can still be worked, by scraping or turning, but lacks mechanical strength.
After further drying and firing at 450–750°C the chemically-combined water is
driven off, the clay can no longer combine with water, and it becomes like
moderately hard stone. Firing at a higher temperature eventually causes the
clay to vitrify and fuse, but that stage was rarely reached in the ancient world.
It is impossible to date the technical advances made during the early
civilizations of Egypt, Mesopotamia and the Indus, but the art of throwing pots
on the potter’s wheel evolved at this time, as did the firing of the ware in kilns,
fuelled with wood or charcoal, in place of the open fire. Temperatures of just
over 1000°C could occasionally be reached and much greater control of the
draught and therefore the heating conditions was achieved. Pottery could be
rendered sufficiently non-porous by burnishing, that is, smoothing the unbaked
surface by rubbing, but a better surface could be obtained by dipping the ware
in a ‘slip’ or a slurry of fine clay and firing, or by glazing, that is, painting on to
the surface a substance which on firing would turn into a thin layer of glass.
The Egyptian blue glaze was a notable example. It was made from white sand,
natron, limestone and a copper compound, perhaps malachite, which imparted
a blue colour to the mixture. This was heated for two days at around 900°C,
powdered and applied as a glaze to a siliceous body.
The Assyrians, about 700 BC, introduced glazes based on lead oxide, an
important development, as this was the first glaze that would adhere to a clay
base. They were able to obtain a yellow colour by roasting antimony sulphide
with lead oxide, and blue and red from copper compounds. The Greeks and
Romans made progress in fine workmanship and artistic design rather than in
technology. The Greeks, from about 600 BC, did however develop the
technique of black and red ware achieved by using reducing and oxidizing
conditions to produce two different states of the iron oxide in the red clay.
The most interesting development over the next millennium was that of
lustre ware. A paste formed from powdered sulphides of copper and silver was