Computers, Networks and Education (1991)
The physicist Murray Gell-Mann
has remarked that education in the 20th century is like being
taken to the world's greatest restaurant and being fed the menu.
He meant that representations of ideas have replaced the ideas
themselves; students are taught superficially about great discoveries
instead of being helped to learn deeply for themselves.
In
the near future, all the representations that human beings have
invented will be instantly accessible anywhere in the world
on intimate, notebook-size computers.
But will we be able to get from the menu to the food? Or will
we no longer understand the difference between the two? Worse,
will we lose even the ability to read the menu and be satisfied
just to recognize that it is one?
There has always been confusion
between carriers and contents. Pianists know that music is not
in the piano. It begins inside human beings as special urges
to communicate feelings. But many children are forced to "take
piano" before their musical impulses develop; then they turn
away from music for life. The piano at its best can only be
an amplifier of existing feelings, bringing forth multiple notes
in harmony and polyphony that the unaided voice cannot produce.
The computer
is the greatest "piano" ever invented, for it is the master
carrier of representations of every kind. Now there is a rush
to have people, especially schoolchildren, "take computer."
Computers can amplify yearnings in ways even more profound than
can musical instruments. But if teachers do not nourish the
romance of learning and expressing, any external mandate for
a new "literacy" becomes as much a crushing burden as being
forced to perform Beethoven's sonatas while having no sense
of their beauty. Instant access to the world's information will
probably have an effect opposite to what is hoped: students
will become numb instead of enlightened.
In addition to the notion that
the mere presence of computers will improve learning, several
other misconceptions about learning often hinder modern education.
Stronger ideas need to replace them before any teaching aid,
be it a computer or pencil and paper, will be of most service.
One misconception might be called the fluid theory of education:
students are empty vessels that must be given knowledge drop
by drop from the full teacher-vessel. A related idea is that
education is a bitter pill that can be made palatable only by
sugarcoating, a view that misses the deep joy brought
by learning itself.
Another mistaken view holds
that humans, like other animals, have to make do only with nature's
mental bricks, or innate ways of thinking, in the construction
of our minds. Equally worrisome is the naive idea that reality
is solely what the senses reveal. Finally, and perhaps most
misguided, is the view that the mind is unitary, that it has
a seamless "I"-ness.
Quite
the contrary. Minds are far from unitary: they consist of a
patchwork of different mentalities. Jerome
S. Bruner of New York University has suggested that we have
a number of ways to know and think about the world, including
doing, seeing and manipulating symbols. What is more, each of
us has to construct our own version of reality by main force,
literally to make ourselves. And we are quite capable of devising
new mental bricks, new ways of thinking, that can enormously
expand the understandings we can attain. The bricks we develop
become new technologies for thinking.
Many of the most valuable structures
devised from our newer bricks may require considerable effort
to acquire. Music, mathematics, science and human rights are
just a few of the systems of thought that must be built up layer
by layer and integrated. Although understanding or creating
such constructions is difficult, the need for struggle should
not be grounds for avoidance. Difficulty should be sought out,
as a spur to delving more deeply into an interesting area. An
educational system that tries to make everything easy and pleasurable
will prevent much important learning from happening.
It is also important to realize
that many systems of thought, particularly those in science,
are quite at odds with common sense. As the writer Susan Sontag
once said, "All understanding begins with our not accepting
the world as it appears." Most science, in fact, is quite literally
non-sense. This idea became strikingly obvious when such instruments
as the telescope and microscope revealed that the universe consists
of much that is outside the reach of our naive reality.
Humans are
predisposed by biology to live in the barbarism of the deep
past. Only by an effort of will and through use of our invented
representations can we bring ourselves into the present and
peek into the future. Our educational systems must find ways
to help children meet that challenge.
In the past few decades the
task before children, indeed before all of us, has
become harder. Change has accelerated so rapidly that what one
generation learns in childhood no longer applies 20 years later
in adulthood. In other words, each generation must be able to
quickly learn new paradigms, or ways of viewing the world; the
old ways do not remain usable for long. Even scientists have
problems making such transitions. As Thomas S. Kuhn notes dryly
in The Structure of Scientific Revolutions, a paradigm shift
takes about 25 years to occur because the original defenders
have to die off.
Much of the learning that will
go on in the future will necessarily be concerned with complexity.
On one hand, humans strive to make the complex more simple;
categories in language and universal theories in science have
emerged from such efforts. On the other hand, we also need to
appreciate that many apparently simple situations are actually
complex, and we have to be able to view situations in their
larger contexts. For example, burning down parts of a rain forest
might be the most obvious way to get arable land, but the environmental
effects suggest that burning is not the best solution for humankind.
Up to now,
the contexts that give meaning and limitation to our various
knowledges have been all but invisible. To make contexts visible,
make them objects of discourse and make them explicitly reshapable
and inventable are strong aspirations very much in harmony with
the pressing needs and onrushing changes of our own time. It
is therefore the duty of a well-conceived environment for learning
to be contentious and even disturbing, seek contrasts rather
than absolutes, aim for quality over quantity and acknowledge
the need for will and effort. I do not think it goes too far
to say that these requirements are at odds with the prevailing
values in American life today.
If the music is not in the
"piano," to what use should media be put, in the classroom and
elsewhere? Part of the answer depends on knowing the pitfalls
of existing media.
It
is not what is in front of us that counts in our books, televisions
and computers but what gets into our heads and why we want to
learn it. Yet as Marshall McLuhan, the philosopher of communications,
has pointed out, the form is much of what does get into our
heads; we become what we behold. The form of the carrier of
information is not neutral; it favors context-free factoids,
often presented simply because they are recent. Two hundred
years ago the Federalist papers (essays by James
Madison, Alexander Hamilton and John Jay arguing for ratification
of the U.S. Constitution) were published in newspapers
in the 13 colonies. Fifty years later the telegraph and its
network shifted the goals of news from depth to currency, and
the newspapers changed in response. Approximately 100 years
after that, television started shifting the emphasis of news
from currency to visual immediacy.
Computers have the same drawbacks
as other media, and yet they also offer opportunities for counteracting
the inherent deficits. Where would the authors of the Constitution
publish the Federalist papers today? Not in a book; not enough
people read books. Not in newspapers; each essay is too long.
Not on the television; it cannot deal with thoughtful content.
On computer networks? Well, computer displays, though getting
better every year, are not good enough for reading extended
prose; the tendency is to show pictures, diagrams and short
"bumper sticker" sentences, because that is what displays do
well.
But the late 20th century provides
an interesting answer to the question: transmitting over computer
networks a simulation of the proposed structure and processes
of the new Constitution. The receivers not only could run the
model but also could change assumptions and even the model itself
to test the ideas. The model could be hyperlinked to the sources
of the design, such as the constitution of Virginia, so that
"readers" might readily compare the new ideas against the old.
(Hyperlinking extends
any document to include related information from many diverse
sources.) Now the receivers would have something stronger than
static essays. And feedback about the proposals, again
by network, could be timely and relevant.
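As a concrete illustration, here is a minimal sketch, in Python, of what one fragment of such a runnable "essay" might look like: a toy model of House apportionment under the proposed Constitution, whose assumptions the receiver can change and rerun. The state populations, the thirty-thousand-person ratio and the source link are rough or invented figures, used only for illustration.

```python
# A toy, illustrative model of one clause of the proposed Constitution:
# House seats apportioned by population. A skeptical "reader" can change
# the assumptions and run the model again. All figures are approximate.

POPULATIONS = {"Virginia": 750_000, "Pennsylvania": 430_000,
               "New York": 340_000, "Delaware": 59_000}

# Hyperlinks from the model back to the sources of its design.
SOURCES = {"apportionment": "constitution_of_virginia.txt"}

def house_seats(populations, persons_per_seat=30_000, minimum_seats=1):
    """Apportion seats; both parameters are assumptions open to change."""
    return {state: max(minimum_seats, population // persons_per_seat)
            for state, population in populations.items()}

print(house_seats(POPULATIONS))                           # the model as proposed
print(house_seats(POPULATIONS, persons_per_seat=50_000))  # a rival assumption
```

The point is not the model's accuracy but that the receiver can argue with it by editing it.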
...
Ten years from now, powerful,
intimate computers will become as ubiquitous as television and
will be connected to interlinked networks that span the globe
more comprehensively than telephones do today.
The
first benefit is great interactivity. Initially the computers
will be reactive, like a musical instrument, as they are today.
Soon they will take initiatives as well, behaving like a personal
assistant. Computers can be fitted to every sense. For instance,
there can be displays for vision; pointing devices and keyboards
for responding to gesture; speakers, piano-type keyboards and
microphones for sound; even television cameras to recognize
and respond to the user's facial expressions. Some displays
will be worn as magic glasses and force-feedback gloves that
together create a virtual reality, putting the user inside the
computer to see and touch this new world. The surface of an
enzyme can be felt as it catalyzes a reaction between two amino
acids; relativistic distortions can be directly experienced
by turning the user into an electron traveling at close to the
speed of light.
A second value is the ability
of the computers to become any and all existing media, including
books and musical instruments. This feature means people will
be able (and now be required) to choose the kinds of media through
which they want to receive and communicate ideas. Constructions
such as texts, images, sounds and movies, which have been almost
intractable in conventional media, are now manipulatable by
word processors, desktop publishing, and illustrative and multimedia
systems.
Third, and more important,
information can be presented from many different perspectives.
Marvin L. Minsky of MIT likes to say that you do not understand
anything until you understand it in more than one way. Computers
can be programmed so that "facts" retrieved in one window on
a screen will automatically cause supporting and opposing arguments
to be retrieved in a halo of surrounding windows. An idea can
be shown in prose, as an image, viewed from the back and the
front, inside or out. Important concepts from many different
sources can be collected in one place.
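A hypothetical sketch of such multi-perspective retrieval follows; the corpus and its tagging scheme are invented. Fetching a "fact" also fetches its halo of supporting and opposing arguments.

```python
# A hypothetical sketch: retrieving a "fact" automatically retrieves a
# halo of supporting and opposing arguments around it. The corpus and
# its tagging scheme are invented for illustration.

CORPUS = {
    "burning rain forest yields arable land": {
        "supporting": ["cleared plots can be farmed within a season"],
        "opposing": ["soil fertility collapses after a few harvests",
                     "deforestation alters regional rainfall"],
    },
}

def retrieve_with_halo(fact):
    """Return the fact framed by its supporting and opposing windows."""
    halo = CORPUS.get(fact, {"supporting": [], "opposing": []})
    return {"fact": fact, **halo}

print(retrieve_with_halo("burning rain forest yields arable land"))
```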
Fourth, the heart of computing
is building a dynamic model of an idea through simulation. Computers
can go beyond static representations that can at best argue;
they can deliver sprightly simulations that portray and test
conflicting theories. The ability to "see" with these stronger
representations of the world will be as important an advance
as was the transition to language, mathematics and science from
images and common sense.
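As a sketch of what portraying and testing conflicting theories can mean in code, the toy model below runs two rival accounts of growth, unbounded and resource-limited, from the same starting point. Every parameter is invented.

```python
# A toy simulation that lets two conflicting theories be run side by
# side: unbounded exponential growth versus resource-limited logistic
# growth. Every parameter here is illustrative.

def exponential(population, rate, steps):
    history = [population]
    for _ in range(steps):
        population += rate * population
        history.append(round(population))
    return history

def logistic(population, rate, capacity, steps):
    history = [population]
    for _ in range(steps):
        population += rate * population * (1 - population / capacity)
        history.append(round(population))
    return history

# The same starting point, two theories; the divergence is the argument.
print(exponential(100, 0.3, 10))
print(logistic(100, 0.3, 1000, 10))
```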
A fifth benefit is that computers
can be engineered to be reflective. The model-building capabilities
of the computer should enable mindlike processes to be built
and should allow designers to create flexible "agents." These
agents will take on their owner's goals, confer about strategies
(asking questions of users as well as answering their queries)
and, by reasoning, fabricate goals of their own.
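A speculative sketch of such an agent, with an entirely invented design: it adopts its owner's goals, confers by posing questions back and fabricates subgoals of its own.

```python
# A speculative sketch of a reflective "agent": it adopts its owner's
# goals, confers by posing questions back, and fabricates subgoals.
# The design is invented, not a description of any real system.

class Agent:
    def __init__(self, owner):
        self.owner = owner
        self.goals = []

    def adopt(self, goal):
        self.goals.append(goal)

    def confer(self):
        """Ask questions of the owner and derive subgoals by 'reasoning'."""
        questions = [f"Which sources should I trust for '{g}'?"
                     for g in self.goals]
        self.goals += [f"collect opposing views on '{g}'"
                       for g in list(self.goals)]
        return questions

agent = Agent("reader")
agent.adopt("understand the proposed constitution")
print(agent.confer())
print(agent.goals)
```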
Finally, pervasively networked
computers will soon become a universal
library, the age-old dream of those who love knowledge.
Resources now beyond individual means, such as supercomputers
for heavy-duty simulation, satellites and huge compilations
of data, will be potentially accessible to anyone.
For children, the enfranchising
effects of these benefits could be especially exciting. The
educator John Dewey noted that urban children in the 20th century
can participate only in the form, not the content, of most adult
activities; compare the understanding gained by a city girl
playing nurse with her doll to that gained by a girl caring
for a live calf on a farm. Computers are already helping children
to participate in content to some extent. How students from
preschool to graduate school use their computers is similar
to how computer professionals use theirs. They interact, simulate,
contrast and criticize, and they create knowledge to share with
others.
When massively interconnected,
intimate computers become commonplace, the relation of humans
to their information carriers will once again change qualitatively.
As ever more information becomes available, much of it conflicting,
the ability to critically assess the value and validity of many
different points of view and to recognize the contexts out of
which they arise will become increasingly crucial. This facility
has been extremely important since books became widely available,
but making comparisons has been quite difficult. Now comparing
should become easier, if people take advantage of the positive
values computers offer.
Computer designers can help
as well. Networked computer media will initially substitute
convenience for verisimilitude, and quantity and speed for exposition
and thoughtfulness. Yet well-designed systems can also retain
and expand on the profound ideas of the past, making available
revolutionary ways to think about the world. As the media critic
Neil Postman has pointed out, what is required is a kind of guerrilla warfare,
not to stamp out new media (or old) but to create a parallel
consciousness about media, one that gently whispers
the debits and credits of any representation and points the
way to the "food."
For example, naive acceptance
of onscreen information can be combated by designs that automatically
gather both the requested information and instances in which
a displayed "fact" does not seem to hold.
An on-line library that retrieves
only what is requested produces tunnel vision and misses
the point of libraries; by wandering in the stacks, people inevitably
find gems they did not know enough to seek. Software could easily
provide for browsing and other serendipitous ventures.
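One way software could provide for such browsing is sketched below, with invented catalog entries: every query returns its matches plus a few unrequested finds from nearby shelves.

```python
# A sketch of "wandering in the stacks" by software: each query returns
# its matches plus a few randomly chosen, unrequested neighbors.
# The catalog entries are illustrative.

import random

CATALOG = ["The Federalist Papers",
           "The Constitution of Virginia",
           "The Structure of Scientific Revolutions",
           "Amusing Ourselves to Death"]

def browse(query, serendipity=2):
    """Return matches, then a few gems the reader did not ask for."""
    matches = [title for title in CATALOG if query.lower() in title.lower()]
    others = [title for title in CATALOG if title not in matches]
    return matches + random.sample(others, min(serendipity, len(others)))

print(browse("constitution"))
```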
Today facts are often divorced
from their original context. This fragmentation can be countered
by programs that put separately retrieved ideas into sequences
that lead from one thought to the next. And the temptation to
"clay push," to create things or collect information by trial
and error, can be fought by organizational tools that help people
form goals for their searches. If computer users begin with
a strong image of what they want to accomplish, they can drive
in a fairly straightforward way through their initial construction
and rely on subsequent passes to criticize, debug and change.
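The sequencing idea above can be sketched with a hand-built "leads to" relation, drawn here from this essay's own newspaper example: separately retrieved ideas are arranged so that each one leads to the next.

```python
# A sketch of countering fragmentation: separately retrieved ideas are
# arranged so each leads to the next. The "leads to" relation is a
# hand-built toy drawn from the newspaper example earlier in this essay.

LEADS_TO = {
    "telegraph shifts news from depth to currency":
        "newspapers change in response",
    "newspapers change in response":
        "television shifts news toward visual immediacy",
}

def sequence_from(start, leads_to=LEADS_TO):
    """Follow the leads-to relation to build a connected train of thought."""
    chain = [start]
    while chain[-1] in leads_to:
        chain.append(leads_to[chain[-1]])
    return chain

for idea in sequence_from("telegraph shifts news from depth to currency"):
    print(idea)
```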
If the personally owned book
was one of the main shapers of the Renaissance notion of the
individual, then the pervasively networked computer of the future
should shape humans who are healthy skeptics from an early age.
Any argument can be tested against the arguments of others and
by appeal to simulation. Philip Morrison, a learned physicist,
has a fine vision of a skeptical world: ". . . genuine trust
implies the opportunity of checking wherever it may be wanted....
That is why it is the evidence, the experience itself and the
argument that gives it order, that we need to share with one
another, and not just the unsupported final claim."
I have no doubt that as pervasively
networked intimate computers become common, many of us will
enlarge our points of view. When enough people change, modern
culture will once again be transformed, as it was during the
Renaissance. But given the current state of educational values,
I fear that, just as in the 1500s, great numbers of people will
not avail themselves of the opportunity for growth and will
be left behind. Can society afford to let that happen again?