Touch and the Observer's Vantage Point
John M. Kennedy
University of Toronto
On a clear day, when you seem to see forever as you stand spellbound before a vista of distant mountains, you have an impression of space, but you also have a well-defined vantage point. The vista specifies your own unique location (Gibson, 1979). If you take a photo that day, the photo tells where you were standing. It says "you were here!" The contours of the hills not only reveal where they are, silhouetted against the sky, they also indicate the special spot from which the photo was taken. If you are on a mountain track, and you move to one side, to take more pictures, the shapes of the brows of the hills will change slightly to specify your trajectory. Parts of distant hills evident in one picture may be hidden in a shot from a neighbouring viewpoint along the track. If the vista opens out to the ocean in one direction, you may see as far as the horizon.
Visually, the dimension of distance has as one anchor the distal object.
The other anchor is the observer's vantage point: no vantage point, no distance.
Acting as a far target for observation, the horizon often anchors one end of the
dimension of distance. It is a visual limit for a terrestrial plane. The other
end is the observer's vantage point. It is the origin for measures of the
distance of the target from the eye. The origin is the centre of a sphere of directions. From the origin we
can move our gaze in six ways. We can change our heading via yaw, pitch and
roll, and we can move our origin up, sideways and forward. That is, not
only do we have to look into space
from our own limited standpoint, we also have to gaze in a particular direction
at any given moment. We look up (changing pitch) to the heavens above the
horizon, or down to the ground. We can look
left and right (yaw) to where our path may take us, perhaps along a cliff edge
of a plateau. We can revolve to stand on our head (roll). Also, our origin can
be moved left and right along the path as if on a moving belt, be raised or
lowered as if on an elevator, or tempt fate by allowing itself to go to the
front and towards the edge of the bluffs or to the back and safely away from the
precipice. There are three ways to change our direction of gaze from a fixed
origin, and three ways to move the origin: Six degrees of freedom for our
singular vantage point.
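These six degrees of freedom can be put in a compact computational form. The short Python sketch below is merely illustrative; the class name, the fields and the sample values are inventions for exposition, not a model drawn from the perception literature.

from dataclasses import dataclass

@dataclass
class VantagePoint:
    # Three numbers fix where the origin is; three more fix the heading.
    # The names and units here are illustrative assumptions only.
    x: float = 0.0      # slide left or right along the path
    y: float = 0.0      # raised or lowered, as if on an elevator
    z: float = 0.0      # to the front (towards the bluffs) or to the back
    yaw: float = 0.0    # look left and right (degrees)
    pitch: float = 0.0  # look up to the heavens or down to the ground
    roll: float = 0.0   # revolve, as when standing on one's head

    def move(self, dx=0.0, dy=0.0, dz=0.0):
        """Move the origin without changing the direction of gaze."""
        self.x += dx; self.y += dy; self.z += dz

    def turn(self, dyaw=0.0, dpitch=0.0, droll=0.0):
        """Change the direction of gaze from a fixed origin."""
        self.yaw += dyaw; self.pitch += dpitch; self.roll += droll

observer = VantagePoint()
observer.turn(dpitch=30.0)  # look up above the horizon
observer.move(dy=1.5)       # origin raised, as if on an elevator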
The observer's vantage point is evident in vision and it is made known
precisely and exactly in pictures based on optic projection. It has six degrees
of freedom we take liberties with daily. Is there anything like the eye's
vantage point in touch? Or does touch depend entirely on direct contact, that is,
on stimuli in proximity to the body? Does touch rely so much on proximal arrays
that it resists any use of a vantage point? Is there any way in which touch
enquires about distant objects, far removed from the observer, not abutting the
body? In what ways might touch act as a distal sense as well as a proximal
sense? Does touch have degrees of freedom? Does it use contours of objects, like
the brows of hills, to specify its location? How might a
change in tactual location be
reflected in a tactual vantage point? Where are the tactual sky and the horizon?
Visual impressions of a distinct, precise vantage point are well matched
by the photos we take of the attractive vista. The photo is a record of the
light rays coming to a single point, through a lens. If we made pictures for
touch, using displays with raised elements (Edman, 1992), would touch provide us
with information about a particular vantage point? Some might think the pictures
would do so only
for the sighted, who can imagine what the tactile display could look
like, and interpret the picture bearing in mind a visual vantage point (Revesz,
1950). Surely there would be a lot to learn
about vantage points if a blind person unfamiliar with pictures were able
to use a picture drawn from a single vantage point.
I will discuss the questions here in an argument about dimensions of distance in touch. Fundamentally, I will propose that vantage points abound in touch. And I will contend they can be used usefully in tactile pictures, and be interpretable through touch in similar ways by the sighted and the blind.
Let us begin with a thought experiment. Imagine touching or looking at a line of raised dots (Arnheim, 1974; Kennedy, 1993, 1997; Holmes et al, 1998). The dots are elements that induce a perceived line. The perceived line crosses the empty space between the dots. Much of our perception of space is like this. We see or touch a few objects on a surface and gain an impression of the relations between them. We also get an impression of the relations between the inducers, the induced line and our own vantage point. The vagaries of these relations are the topic to be discussed here.
The historical legacy: Molyneux's question and Murphy's Law
Many scholars have asked about the relation between vision and touch.
Vision's vantage point was often plain, and often mistakenly assumed to be
entirely obvious, in these discussions. Alas, the idea that touch might have use
for vantage points was often conspicuously absent. The result was, I think, a
very lop-sided debate.
William Molyneux, an Irish barrister of the late seventeenth century,
had a wife who was blind, and was moved to throw a celebrated question into the
pool of philosophical debate. Query:
would a blind person familiar with cubes and spheres
be able to recognize them if given sight by some enterprising operation?
Many eminent scholars who did not know the answer rushed to reply. The character
of their arguments set the tone of
enquiry for many years, with vision given abilities explicitly, and touch
belittled by errors of omission. In a variation of Murphy's Law that what can go
wrong will, parallels that might be misconstrued were.
John
Locke (1690, see also Boring, 1942, 1950) in an "Essay Concerning Human
Understanding" commented that
vision gives us more than light and colours. It also gives us the "far
different ideas of space, figures and motion". Locke discussed projection
to a vantage point. He noted that a sphere is projected in vision as a circle
and a cube as a square or hexagon. He described
these projections in terms of vision. He did not
consider projection in touch.
Reid (in 1764, see Boring,
1942, 1950) astutely described how vision sees changeable aspects of an object
in its projections. But in touch, he thought, we gain the impression that the
object is one and the same across all these aspects. Vision, Reid wrote, initially takes a sphere as a
circular form, variously coloured if it is partly in light and partly in shadow.
But the genius of perceptual learning is that aided by touch we can discover
that different distances are relevant, not just various colourings, and
"this perception" gives the circular form convexity, adding a third
dimension (VI, section 23).
Synge (in 1693, see Eriksson, 1998) debated the basis for Molyneux's
question, asking what a person born blind might have as an idea of a sphere or a
cube. A tactile idea of a sphere,
Synge proposed, was of an object that felt the same all over. In contrast, a
cube has distinct parts. Some are sharp vertices, some are flat, and some are
long straight corners between flat areas.
Berkeley (in 1709, see Eriksson, 1998) noted that when we look at a point, the
point will not tell us whether it comes from a short distance or a long
distance. Its distance is indeterminate.
Locke, Reid, Synge and Berkeley do not offer systematic conjectures
about touch having vantage points, dealing with projections and anchoring
distance information. Locke's spheres and cubes project shapes in vision, and
not in touch. But the directions of parts of objects vary just as much in touch
as they do in vision. Reid failed to mention that cubes have different aspects
to touch. Synge's spheres always feel the same. He does not mention there are
many ways one might vary the vantage points from which contact is made. When
Synge makes the point about distinct parts of a cube, one wants in vain to have
an orderly treatment of the fact that some of the parts of the cube could be
"near" and others "far". Berkeley's visual point on
our retina could be compared usefully to a tactile point on our skin. The visual
point does not tell us how far its straight-line transmission has come. That
requires a specific informative context for the point (Gibson, 1979). The
tactile point does not tell us how long a rod is behind it. Wielding a rod does
tell us about its length (Turvey, 1995).
Diderot (in 1749, see Morgan 1977), French encyclopedist, is a radical in this company of British Empiricists. Ironically, he offered more empirical observations than the Empiricists. His observations make notions of a tactile vantage point relevant. He discussed the abilities of two blind men -- a man from Puiseau, and a mathematician from Cambridge. Both of these men dealt with shape and distance. They appreciated that we reach out for objects, from wherever we are standing. The man from Puiseau spoke of reaching out with his stick. Sometimes an obstacle might block his access to the object he was trying to touch with his stick. The outcome is a valid awareness of spatial properties, the body as origin, several objects around it, some near and some far, an outstretched arm, extended by a stick, and occlusion of one object by another.
Is
there direction in touch? A static stick does not have direction, but, as
Diderot mentioned, a motion is in a particular direction. What about shape? Our
hands pass through a succession of places, in following a string, Diderot wrote.
If the string is taut, it provides a succession of points or places that
can be combined, using memory, into a straight line. If it is slack, the
combination that will result will be a curve. We can recall the shapes and refer
to the properties we discover through touch, across
a succession of points, he conjectured.
Diderot's discussion led him into fierce contradictions. We combine
tactile points, using memory, and can later "refer" to
the products, Diderot believed. But he went on to argue that touch will
not allow us to "imagine" figures. He argued that to imagine figures
we have to separate the lines or borders of shapes from their background,
and this requires the lines or borders of figures to be defined by
"different colours" than the background. This is clearly silly. We can
imagine raised lines, not just coloured lines. In both vision and touch, we
perceive continuous lines induced by rows of dots. The spaces between the dots
have no specific colour or height. The perceived lines too have no colour or
height. We can combine the dots,
use memory, and later refer to the result, but not imagine what we have done?
Tut-tut! Imagining and referring seem suspiciously like the same operation by
two names.
There are chicken-and-egg problems
in Diderot. What enables us to know that a set of points is in a straight line?
The line itself cannot tell us, because we could be suffering an illusion.
Evidently there is an empirical question here: What is taken to be straight and
crooked in touch? We cannot just
assert that straight things are perforce perceived as straight.
Let
us take away from Diderot one valuable idea: touch involves reaching from a
direction, and so we have at least one kind of vantage point in touch.
Running
counter to his own restrictions on imagination, Diderot noted that a blind man could consider a sphere, and
then envisage a smaller or larger object, with the same shape. Change of scale
leaves shape invariant. In this fashion, the blind man could imagine the
terrestrial globe. Atoms and molecules can be imagined in the same fashion. But
further, surely the blind man can imagine where atoms or celestial spheres are
in direction from us: in front of us or to one side, near or far, small
distances or huge ones. That is, it is likely that direction in touch
cognitively implies a wide range of distances.
The
idea that touch supports implied relations between ourselves and objects
deserves attention. Just as touch's reaction to truly straight things needs to
be explored, so too our consideration of possible vantage points, some real and
some imaginary, cannot be taken for granted. Just as there are explicit numbers,
numbers that are implied, numbers that are real and numbers that are imaginary,
so too touch may serve an observer entertaining many kinds of vantage points,
some real and some imaginary. Since we reach out in certain directions
deliberately, we obviously have goals before we reach: The imagined or perceived
directions of targets. Since we can intend to move our vantage point to pick up
objects that are just out of reach at the moment, we can imagine moving our
vantage point.
What about the relations between two directions? It is this that gives us perspective, let us note. Descartes (in 1638, see Boring, 1942) came close to posing systematic parallels between vision and touch in their use of perspective, and his conjectures may have influenced Diderot writing about a blind man reaching out with a stick. Descartes explained that in visual fixation our two eyes converge on a target (Cabe et al, in preparation). The directions in which the two eyes gaze are like two rods crossing at the target (Boring, 1942). If we can look from two directions in a fashion that is like using two sticks, and gain perspective on an object's location and distance, surely we can reach with one stick in two directions and gain similar knowledge. Descartes described a blind man holding two sticks that intersect a short distance in front of him. Descartes suggested the man could estimate the distance to the crossing-point. The estimate would be made using a kind of natural geometry, Descartes believed, based on the angles of the hands, wrists and arms, and similar effects would arise in visual convergence. In fact, vision is poor at using convergence angles, Gogel (1961) concludes. However, Cabe et al (in preparation) report touch is quite good at estimating the distance to the intersection of two hand-held sticks. The pairs of sticks Cabe et al tested were positioned one to the left of the median plane, one to the right. Distances of intersections in front were estimated more accurately than ones to the side, which were underestimated more as the intersection departed from straight ahead (possibly because the angle of intersection diminished considerably, and when it was tiny it was overestimated).
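The geometry involved can be sketched numerically. The fragment below is only an illustration of triangulation from two hand-held rods; the hand positions, angles and units are invented for the example, not taken from Cabe et al's experiment.

import math

def crossing_point(left_hand, right_hand, left_angle_deg, right_angle_deg):
    """Intersect two rods in the horizontal (top-down) plane.

    Hand positions are (x, y) in metres; angles are measured from straight
    ahead (+y), positive to the right. All values here are illustrative.
    """
    def direction(angle_deg):
        a = math.radians(angle_deg)
        return (math.sin(a), math.cos(a))

    (x1, y1), (dx1, dy1) = left_hand, direction(left_angle_deg)
    (x2, y2), (dx2, dy2) = right_hand, direction(right_angle_deg)
    # Solve left_hand + t*d1 == right_hand + s*d2 for t (Cramer's rule).
    denom = dx1 * (-dy2) - dy1 * (-dx2)
    t = ((x2 - x1) * (-dy2) - (y2 - y1) * (-dx2)) / denom
    return (x1 + t * dx1, y1 + t * dy1)

# Hands 40 cm apart, each rod angled 15 degrees inward (invented values).
px, py = crossing_point((-0.2, 0.0), (0.2, 0.0), 15.0, -15.0)
print(round(py, 2), "metres in front of the hands")  # about 0.75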
Tactile pictures: Eriksson's history
Eriksson
(1998) argues that the discussion of the relation between vision and touch from
the Seventeenth century onwards likely influenced many educators of the blind.
She has written about the manufacture of tactile displays for the blind from
1784 to 1940. Many of these displays were pictures in raised form, using solid
lines, or dotted lines, or bas-reliefs. In the writings of the pedagogues
Eriksson surveyed, ideas about touch as a spatial sense are evident. But once
again the claims about touch offer imperfect parallels with vision.
Like Diderot and Locke, educators in France, Germany and Britain stressed
that motion was needed for tactile perception of shape, size and distance.
Consider a few discussed by Eriksson. In France, in the early Nineteenth
century, Guillié noted that the blind can only have successive ideas of the objects they
touch. But then he added that they can perform a secondary task, to bring these
impressions together, and perhaps a third task in order to compare impressions.
In Britain, in the middle of the Nineteenth century,
Fowler discussed passing our fingers over a table slowly or quickly. The
rate of motion and the time taken indicate the size of the table. We only need a
few contacts to get the impression of a continuous table surface. In Germany, at
the turn of the century, Heller wrote
that the mobility of the hand is a key condition for the development of the
blind person's sense of direction. He wrote about blind pupils examining tactile
displays, using active exploration. He also described a subject reporting
imagining an index finger moving from one point on a tactile display to another.
But Heller then argued that it would be in vain to try to form a total
impression of a large object close to us via touch. To form just such an
impression we have to remove the object to a larger distance, mentally, and
somehow reduce it. It is impossible to get a simultaneous impression from a
large object -- only a small object will permit this in touch, Heller
hypothesized.
Eriksson found tactile displays made by Martin Kunz (1847--1923) in
institutes for the blind throughout Europe and North America. Kunz not only made
many displays, he theorized about the abilities they called on. In one
extraordinary conjecture, he claimed blind people often do not have a sense of
distance. He argued the sense of distance does not develop if one becomes blind,
notes Eriksson (1998, page 77). In
a curious turn of events, Kunz proposed the blind could have a well-developed
sense of location, indicating the places of many objects, but this was by no
means the same as a sense of distance.
The teachers who worked with the blind, and the manufacturers and
designers who prepared tactile pictures, were likely deeply influenced by the
debates among philosophers and education theorists. Gall (in 1837, cited in
Eriksson, page 92), a Scottish clergyman who prepared tactile pictures, wrote
"The Blind can feel the shape of any image they can handle; but not
having any idea of perspective, it is only an outline which can be
perceived". Later, in the century Martin Kunz , discussing tactile
pictures, wrote that for the finger there is no perspective, Eriksson reports.
Perspective and directions from the observer's vantage point are often at
issue, but in disjointed ways, in the instructions accompanying pictures for the
blind. Consider the caption for a picture of hot-air balloons in a picture book
for the blind, published by the National Institute for the Blind, London, in the
1920s. Eriksson (page 127--128) quotes the caption: "Imagine the
rectangular border represents an open window.... Inside the border represents
open space... Stretch out your arm through the window... and move it about in
every direction.... If you could stretch out your arm till it was five or six
hundred yards long, you would be able to touch the nearest balloon...... a
second balloon is shown....It is really the same size as the first, but being
much further away it has the appearance of being smaller and fainter...." (I have abbreviated the
caption.)
Directions from the observer's vantage point are adumbrated usefully in
this caption. But inconsistent use of spatial terms obscures the lesson. The
caption tells us that a balloon lies in a certain direction. But then it alters
its set of key terms. It does not spell out what aspects of direction are
relevant. It changes its terms to appearance, size and faintness. As Lopes
(1997, p. 438) writes, having "identified the picture surface with the
visual field", the use of the significant term "direction" is
overridden. The caption could have said the direction of the top of the nearby
balloon is close to the direction of the top of the window frame. It could have
added that the direction of the basket at the bottom of the balloon is close to
the bottom of the window frame. The differences in direction result in the
nearby balloon almost filling the window frame. The directions to the top and
bottom of the frame, it could have said, are slightly wider apart than the
directions to the top of the balloon and the basket. Then, the caption could
have added, the more distant balloon has a small difference in direction between
its top and its basket. It could have pointed out that as the balloons recede
the differences in the directions diminish.
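The arithmetic behind such a caption is simple. In the sketch below, a balloon whose top is some metres above its basket subtends an ever smaller difference in direction as it recedes; the 20-metre height and the distances are invented for illustration.

import math

def direction_difference_deg(height_m, distance_m):
    """Angle between the top and the basket of a balloon whose basket is at eye level."""
    return math.degrees(math.atan2(height_m, distance_m))

# A balloon 20 m tall at these distances: all values invented for illustration.
for d in (50, 100, 200, 500, 1000):
    print(f"{d:5d} m away: {direction_difference_deg(20.0, d):5.1f} degrees")
# The difference in directions shrinks roughly in inverse proportion to the distance.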
It
is distinctly odd that most theories of touch argue spatial touch requires
motion, in particular directions, in straight and curved paths, and stress that
otherwise we have little except pressure in the finger and the resistance of
surfaces, but then draw in their horns when talking about pictures. Often,
theorizing about touch in connection with pictures describes "fingers", and
fails to entertain motion, direction, mobility, and any of the other
degrees of freedom that give spatial touch its flexible modus vivendi and its
information. Theorists opine that the blind can touch surfaces and imagine
directions. They do not go on to say that blind people touching a picture can
take some picture elements as telling us about directions from a vantage point.
The result is an interminably one-sided discussion.
Lopes (1997) argues that the proper conclusion "to draw here is that perspectival perception is not unique to vision. It is part of any conception of space that enables us to move around our environment, and will be present in experiences in any sense modality that represents space. If perspective is spatial and not distinctively visual, then the argument that vision differs from touch because a component of its content is perspectival, characterized as shapes and sizes on a visual field, is unsound" (page 437). He adds that we have made a mistake in interpreting vision as like a picture, and a picture as solely visual. We redoubled the error in interpreting images in the head as like pictures, and therefore as like vision. He continues "The error is compounded when, having postulated pictures in the head, we then explain pictures on the canvas by means of their alleged similarity to those postulated mental pictures." (p.439). See also Costall (1990), and Cutting and Massironi (1998).
Recent evidence
The history of thought on touch, the blind and space is full of
unfortunate assumptions. There is no a priori reason to insist touch is a
non-spatial medium. Motion around an environment could well give a blind person
an astute awareness of the relative distances between points, and their
directions with respect to each other and the observer.
In practice, how well do blind subjects know the relative distances
between parts of a room? Here I will describe some of the key studies of the
past decade. For an analysis of studies that paved the way to these reports see
Millar (1994), Kennedy (1993) and
Kennedy, Gabias and Heller (1992).
Haber, Haber, Levin and Hollyfield (1993)
tested 7 blind, highly mobile adults (two of whom were congenitally
blind) estimating the absolute distances between 10 objects in a familiar
room. The subject sat at one location in the room, surrounded by the
target objects, to make the
estimates. The estimates were all
closely correlated to actual distance, regression analyses showed (all
correlations within the range .84 to .99). There were no differences between
early-blind and late-blind subjects. The two-dimensional map of distances and
locations one could draw using the subjects' estimates closely matched the
actual room, as an 88% scale model. There
was a single zero point for the estimates--the observer's vantage point.
Haber et al compared the distance estimates of the blind to estimates
made by sighted subjects, in the same room. The sighted subjects were familiar
with the room. They found no significant differences between the estimates of
the blind and the sighted, except that the sighted underestimated the distances
slightly less (5%) than the blind (12%). Haber
et al asked how minor variations in the zero point for the blind compared to
similar minor variations in the sighted. They found no significant differences.
Instructively, Haber asked the sighted and the blind subjects about the
10 objects in a second condition. In this second condition, the subjects sat in
a second room, and were asked about the remote room. The results were the same for both the sighted and the blind:
the original, now-remote room was reported as if it had shrunk by 30%!
Haber et al were not sure of the reasons for the underestimations of
distances, but whatever factors were involved they may be similar in the blind
and the sighted. The major findings are the accuracy of the estimates and their
close correlations with actual variations in distance.
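The form of this analysis can be sketched with invented numbers. Regressing a subject's estimates on the actual distances yields a correlation (how faithfully the estimates track distance) and a slope (the scale of the reconstructed room). The data below are invented for illustration and are not Haber et al's values.

import statistics  # correlation and linear_regression need Python 3.10 or later

# Invented estimates, roughly 0.88 of the actual distances, for illustration only.
actual    = [1.2, 2.5, 3.1, 4.0, 4.8, 5.5, 6.3, 7.0, 8.2, 9.0]  # metres
estimated = [1.1, 2.3, 2.6, 3.5, 4.3, 4.7, 5.6, 6.1, 7.3, 7.9]  # metres

r = statistics.correlation(actual, estimated)
slope, intercept = statistics.linear_regression(actual, estimated)
print(f"correlation r = {r:.2f}")  # how closely the estimates track actual distance
print(f"scale = {slope:.2f}")      # a slope near 0.88 would be an "88% scale model"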
Haber
et al compared the estimates of distances between pairs of objects that could be
joined without intervening barriers, and pairs of objects that had intervening
barriers. There were no significant differences between these distinct pairs,
for either the blind or the sighted. Evidently, occlusion that requires a detour
in travelling between objects did not impair distance judgments in the blind.
Haber et al.'s results are similar to those from Loomis et al (1993),
comparing sighted and blind adults
on spatial navigation tasks. Landau, Spelke and Gleitman (1984, see also Landau
and Gleitman, 1985; Landau, 1991)
tested blind children on their ability to find their way from one object to
another in a room. They reported that if a preschool child learns about four
objects A, B, C and D, by walking from A to B to C to D, thereafter the child
will be able to walk from A to C, ignoring B. The specific route from A to C
does not have to be taught. However, Lockman et al (1981) and Rieser et al
(1980) find the more experience one has with travelling without sight the more
accurate the response to spatial-layout tasks.
Wagner et al (1996) explored the connection between stepping motions and
spatial judgments. Using a treadmill, they varied the rate at which a step
changed the observer's spatial location. Subjects swiftly adopted the new
coordination. The effect was perceptual, not a conscious correction, because
there was an aftereffect once the normal coordination was restored.
Morrongiello et al (1995) videorecorded blind and blindfolded-sighted
children undertaking the Landau ABCD four-locations task. They coded the paths
taken, accuracy of initial turns, closest positions and final positions relative
to the target locations. They also devised a composite score to assess the
efficiency of the path taken. The mean age of the blind children was about 7
years, with a range from 4 years 5 months to 9 years 2 months. They were all
congenitally totally blind (that is, none had ever had sensitivity to light).
The blind children performed like the sighted children on all the measures
except accuracy of the final position, at which they were slightly worse.
The children were also asked to draw some visible or tactile maps, e.g.
showing the route from B to D. The proportion of sighted children who drew correct maps of their routes was about 20% for visible maps or tactile maps at age 4-5. At
age 6-7, the proportions were still about 20% for visible maps, but had risen to
33% for tactile maps. At age 8-9, the proportions were 50% for visible maps and
58% for tactile maps. The majority of the children received the same score for
both maps. Evidently, the map-making ability is modest in preschoolers, grows
slowly, and tactile-map making is at least the equal of visible-map making. The
tasks tap one ability, in the sighted at least, it seems likely.
Morrongiello et al. note that "examining the nature of spatial representation is a challenging task because of ambiguity in the link between representations and behaviour." (p. 228). This was one reason their study used three tasks. On their easiest task--walking between the ABCD destinations--only a few of their youngest subjects accurately reproduced distance and angle information when pursuing novel routes. No three-year-old available to the investigators could be persuaded to undertake the task. In contrast, the actual routes on which the subjects were trained, and reversals of the routes,
were executed summarily.
Measures of spatial ability
Were the youngest children discussed in Morrongiello et al (1995) ever
reluctant to run novel routes for fear of novel obstacles? Could they have
pointed in the right direction (from B to D)? Standing at B could they have told
whether a sounding brass or a tinkling cymbal was located at D? Some response measures reveal more about basic spatial
abilities than others (Millar, 1985, 1994).
Haber, Haber, Penningroth, Novak and Radgowski (1993)
tested body postures, including the use of body extensions such as
pointers, and external pointers, as ways of indicating the direction of sound
sources. The subjects were twenty blind adults. The sources occupied a semicircle from the extreme left,
through straight ahead to the extreme right. The body parts pointed at the
target included the index finger of an outstretched arm, the observer's nose and
the chest, to which a pointer was affixed at right angles. The observers also
tried to point hand-held rods at the target. All these measures were superior to
methods that used external pointers
on their own bases, like rotating a dial. The least accurate methods involved
drawing or offering a verbal description using clockface labels such as noon for
extreme left, 3pm for straight ahead and 6pm for extreme right. Generally, the
body part or hand-held rod methods
were less variable as well as more accurate.
Subjects who travelled less were particularly poor at the more difficult tasks with high variance such as the clock-face or drawing tasks. On the other tasks, the groups performed alike. Travel skills and experience may affect performance on relatively indirect measures of space perception, and have little bearing on basic body orientation to the local environment, it seems (Millar, 1985).
Triangular routes and convergence
Worchel
(1950) argued blind people guided along a right-angled triangle route from
origin A to the right angle at B and then to a terminus C often cannot then walk
back from C to A along the hypotenuse. This extreme claim
is surely false (Klatzky et al, 1990; Millar, 1994). Errors will be made,
but the basic principle that the triangle exists in a two dimensional space,
with directions and spatial extents, is understood by most blind people
intuitively, provided they have no loss other than sight (Kennedy and Campbell,
1985). That is, the blind person knows that extents subtend different angles at
our vantage point as they recede: Directions to the ends of the extent converge.
The
principle of convergence applies in both the horizontal plane and the vertical
plane. We can point to the bottom and top of nearby and distant trees or to the
gaps between columns that are near and far. Both entail convergence. If we point
upwards to a bird flying away from above our head we will find the direction
changes swiftly at first and then more slowly. Likewise, if we point to a
mouse running away from between our feet we find the directions change
swiftly at first and then slowly. Generally, blind people understand this
(Kennedy, 1993). The arms pointing
up to the bird and down to the mouse converge. They converge more and more
slowly as the imagined distances of the bird and mouse increase. Both arms stop
at the horizontal. When they stop they are pointing at the horizon.
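The convergence can be made numerical. In the sketch below, a bird at a fixed height above the head is pointed at as it flies away; the elevation of the pointing arm is the arctangent of height over horizontal distance, so it drops swiftly at first, then ever more slowly, approaching the horizontal. The height and the distances are invented for illustration.

import math

def elevation_deg(height_m, horizontal_distance_m):
    """Elevation of the pointing arm, in degrees above the horizontal."""
    return math.degrees(math.atan2(height_m, horizontal_distance_m))

previous = 90.0  # start with the bird directly overhead
for d in (5, 10, 15, 20, 25, 30):
    e = elevation_deg(10.0, d)  # the 10 m height and these distances are invented
    print(f"{d:3d} m away: arm at {e:5.1f} deg, dropped {previous - e:5.1f} deg")
    previous = e
# The drop per step shrinks; the arm tends toward 0 deg, the horizon.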
The principle of convergence applies to all dimensions of space. It
applies to small scales--the tabletop or manipulative space-- and to larger
scales such as the domestic or room-sized space and the ambulatory space of say
a few hours' walk (Millar, 1994).
The space to which convergence applies consists of directions (angles)
and distances from a vantage point. Do we confuse these two? Klatzky (1999)
tested angle and distance errors in touch, finding they were often unrelated.
Klatzky (1999) looked for the origins of systematic errors made when
blindfolded subjects attempt to walk two legs of a triangle and then to indicate
where the origin is from the final destination. She asked whether errors are introduced when subjects attempt
to imagine that their walk and final destination have been displaced by a rotation
about the terminus, or by displacement to one side.
She found more errors for distances than directions (angles).
Interestingly, she suggests some errors are due to subjects emphasizing
body-centred coordinates: information about space is referred to the observer's
vantage point, and any errors arise from misestimating the body's location and
where it is facing.
Other errors, she argues, come from use of objects or
landmarks as the primary basis for spatial understanding: object-centred
coordinates. She contends body-centred
errors are more evident before subjects entertain imagined rotations or
displacements. But object-centred errors are more common after imagined
rotations.
Despite the presence of some errors, the subjects tested by Haber et al, and by Klatzky, performed quite consistently in spatial tasks, whether they were in the target location, or imagining locations, directions or displacements.
Pictures and the field of directions from a vantage point
When we use a picture, the very medium of picturing has distinctive
implications (Pierantoni, 1986). We can look at a picture pinned to a wall, and
take it to show a person in profile standing upright. If we take the picture off
the wall and lay it on our desk it will be horizontal. Does this make the man
appear to be lying down? Well, no. We take the picture to be a medium whose
vertical or horizontal orientation is not relevant to the orientation of the
profile (Kennedy, 1993). The profile remains that of a person standing erect.
Consider the picture lying on the table. The profile can be turned so
that its nose points toward the observer, or away from the observer.
Then blind and sighted subjects can be asked when the profile seems to be
facing down to the ground and when it faces up to the sky. In a demonstration I
recently conducted, LT, a totally blind man, whose blindness had an early onset
due to retinitis pigmentosa, reported that when the nose (in a horizontal picture,
made of raised lines) was pointing towards him the profile seemed to be facing
the ground. When the nose was pointing away from him, the profile was facing up
to the sky.
The profile demonstration
suggests people use a field of directions around a vantage point centred on
their heads to interpret tactile pictures. What is higher in the field of
directions is taken as representing objects that are higher in the vertical
plane in the world. As another example of this use of a field of directions,
consider the text on this page. Notice that the text can be read while resting
on the table, vertical as if fixed to a wall, or held overhead
as though fixed to a ceiling. Sometimes the text letters have their tops
farther from the observer than their bottoms, as is the case when they are on
the table. Sometimes the tops of the letters are closer to the observer than
their bottoms, as is the case when they are on the ceiling. But in both cases
the tops of the letters are higher in the field of directions than the bottoms.
And in both cases the letters appear upright (Mirabella and Kennedy, in press).
If someone draws a U on our hand as it lies palm-up on a table in front
of us, and we have our eyes closed, we can read it as a U. If we turn our
forearm to bring the hand in front of our tummy, palm up, thumb pointing away
from us, still resting on the table, the same shape on our skin will usually
be read as a C. This suggests the vantage point at our head controls the
apparent shape and identity of the form traced on our skin. What is higher in
the field of directions from that vantage point is the top of the letter.
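A small geometric sketch shows why the same trace can read as a U or as a C. Re-expressing the traced points in the head-centred field of directions, after the hand has turned 90 degrees on the table, rotates the figure, so an opening that pointed up now points to the side. The coordinates below are invented for illustration.

import math

def rotate(points, degrees):
    """Rotate traced points within the head-centred field of directions."""
    a = math.radians(degrees)
    return [(round(x * math.cos(a) - y * math.sin(a), 2),
             round(x * math.sin(a) + y * math.cos(a), 2)) for x, y in points]

# A crude U traced on the palm: two uprights and a base, open towards +y ("up").
u_shape = [(-1, 1), (-1, 0), (0, -1), (1, 0), (1, 1)]

print(rotate(u_shape, 0))   # opening faces up in the field of directions: a U
print(rotate(u_shape, 90))  # hand turned 90 deg: the opening faces sideways, C-like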
Many investigators have used "cutaneous perception" tasks in
which a letter is drawn on the skin, with a blunt stylus, on sites scattered
around the observer's body. Subjects seem to entertain a variety of vantage
points in three dimensions. Natsoulas (1966) drew b, d, p and q on the subject's
forehead. Subjects generally used
one of two vantage points, the results suggested. One had an internal location.
The subject behaved as if "looking"
from a position within their head. The second had an external location,
like a "disembodied eye" (Concoran, 1977). The subject behaved as if
looking from the experimenter's position standing in front of the subject. Often
subjects could readily change from one apparent vantage point to another.
Similar flexibility of vantage points in assessing forms drawn on the skin is
evident in studies on blind subjects (Shimojo, Sasaki, Parsons and Torii, 1989).
Further, the confusions subjects report are between p and q, or b and d.
Subjects maintain the vertical orientation of forms on the forehead.
The studies on profiles and letter forms suggest what is high in the field of directions around a vantage point is "the top" of the object, whether the form is on a table, a wall, a palm or a forehead. But they also suggest vantage points that are disembodied can move horizontally to some extent. It is also relatively easy to imagine being above an array of objects, with "a bird's eye view", as tactile maps often require us to do. But rotation of a vantage point and its field of directions from erect to tilted or inverted (a roll, as opposed to yaw or pitch) is likely more problematic for the observer. Also, rotation of tactile maps in the plane (yaw) by 180 degrees, like rotation of visual maps on a table, often confuses observers about left and right.
Arcs and elevators
Another
kind of rotation moves the object in a vertical arc around the vantage point
(pitch), from a table top to the ceiling. Objects with conventional fronts such
as raised letters and pictures can move in an arc like this and remain upright
and easily legible. Their
orientation is invariant across this arc. They are always "directed
to" the vantage point, and their top is "upwards" in our field of
directions. Orientation and direction are related concepts, by definition. A
tactile vantage point is a location from which orientation or direction can be
established.
Some objects maintain their orientation in an "elevator"
transformation but not an arc. Cups would spill their contents if moved in an
arc, but elevated vertically from a
desktop to a shelf they keep their usefulness (that is, their "affordances",
Gibson, 1979). They are "directed upwards", as is an elevator. The
elevator transformation is often independent of our vantage point. The arc has a
vantage point as its origin. The elevator transformation, however, can take an
object past our vantage point. The cup varies in its accessibility, for it can
be too low or too high for our reach, from a given vantage point. Also, when we
imagine a bird's eye view on a room we have often raised our vantage point as if
on an elevator, while maintaining what is to our front and back, left and right,
and likely this is straightforward for many blind and sighted observers. (A
bird's eye view may be what Haber et al.'s
observers undertook when asked to report distances in one room while sitting in
another room.) When we move left or right (slide), or to the front or back
(to-and-fro) we maintain what is higher or lower than us. These motions preserve
direction--what is straight ahead on the horizon--tactually and visually. Again,
these motions of the vantage point in the plane are familiar to the blind and
the sighted, surely.
The elevator transformation does not invert an object. But raising ourselves
reveals the tops of solid objects (and lowering ourselves reveals their bottoms).
The arc transformation does not invert a raised picture when it moves from a
table to overhead. But if the
display passes overhead and starts downwards again ( behind us, for example) it
inverts while following the arc. Both inverted solid objects and pictures likely
are relatively hard to identify in touch. A u-shape becomes an n-shape. Cookie
cutters that are inverted are often hard to identify in touch. Busts that are
inverted are too, informal class demonstrations suggest.
In sum, orientation at a tactile vantage point is often dependent on a
field of directions from that point. The vantage point may be disembodied to
some extent, in a variety of locations in the three dimensional space around the
observer, so the locus is moved forward or back, displaced to one side or
elevated. As Klatzky pointed out, it can rotate in the plane, remaining upright,
and still be quite usable. Invariant upright orientations of targets facilitate
tactile perception of form, it is likely.
Borders, media and foreground in vision and touch
A
disembodied vantage point from which we take directions to a few objects can be
envisaged relatively freely. It can be where we intend to go, before we actually
go there. But much of our tactile exploration aims to use richer or fuller
environments, with real edges of actual surfaces and textures of surfaces being
examined to determine where we are in relation to objects. Similarly much of
vision involves scanning borders of various kinds to get to know the vista
around us and our place in it.
How do vision and touch use surfaces, textures and borders, and the media
of perception, to recognize where
we are? Rubin (1915) defined a kind of foreground and
background perceived at a contour or line as figure and ground. He might
have added that our vantage point is always indicated by what is foreground and
background.
A line or contour can be defined by colour or luminance borders in
vision. What operates in a similar vein in touch? Touch reveals borders of
surfaces as changes in resistance, notably, and thermal and wetness or friction
properties secondarily. We can certainly detect one surface foregrounded on a
background in touch. For example, we can feel that a sheet of newspaper is
lying on a rubber mat such as a mouse pad. Further, we can feel one section of
our newspaper is lying on and partially covering another, and that on another,
and that on yet another, and so on. We feel a set of foregrounds and backgrounds,
specifying our vantage point in front of the most foregrounded sheet.
A flat surface offers resistance, ending at a border, giving way to
another kind of resistance. We can also feel two similar surfaces meeting at a
corner, with change in slant (where the top of our desk meets a wall, say). The
corner encloses our vantage point, if the corner is concave. We are outside the
corner, we feel, if it is convex (like the corner where the top of the desk
meets the side). A roofline of a model house is a tangible border (Heller et al,
1995). A rounded object seems to offer a definite border between front and back
to vision, and it also offers a border to touch if we take it to be in front of
a particular vantage point, say one from which we are reaching. To that vantage
point, the object presents a front, a back and a clear division between the two:
an occluding boundary of a rounded surface.
Raised line drawings of objects showing
corners and boundaries of rounded surfaces by lines are recognized in
similar fashion by blind and blindfolded sighted 8--13 year olds (D'Angiulli et
al, 1998). The blind children were congenitally totally blind. The performances
of the sighted and the blind children were highly correlated. The scores of the
blind and sighted children exploring the displays actively, with no external
guidance, were correlated .81.
Perhaps some aspects of textures operate similarly in vision and touch. A
sheet of paper and a rubber mouse pad have tangible textures, just as they have
visual textures. Variation in tactile texture is readily used as an indicator of
the slopes of surfaces, though it may be more immediately understandable
in vision (Holmes et al. 1998). Holmes et al.
presented blindfolded subjects with texture patterns with distinct
gradients, say dense at the top and gradually expanding to sparse at the bottom.
In a series of the patterns, the rate at which the texture expanded was varied.
Subjects attempted to match the texture patterns to panels that sloped from
vertical to nearly horizontal. Subjects scaled the magnitudes of the physical
slants of the panels to suitable texture gradients, provided the extremes of
both were evident, and some opportunities for learning were offered.
Early blind subjects performed like sighted subjects.
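The information in such a gradient can be sketched geometrically. In the fragment below, equally spaced texture elements on a panel are projected into directions from the vantage point; the more the panel is slanted away, the faster the angular gaps between successive elements shrink. The viewing distance, element spacing and slants are invented for illustration.

import math

def angular_gaps_deg(slant_deg, distance=0.4, spacing=0.05, n=6):
    """Angular gaps between successive, equally spaced texture elements.

    slant_deg = 0 is a frontoparallel panel; values near 90 are almost horizontal.
    The viewing distance, spacing and n are invented for illustration.
    """
    s = math.radians(slant_deg)
    elevations = [math.degrees(math.atan2(k * spacing * math.cos(s),
                                          distance + k * spacing * math.sin(s)))
                  for k in range(n + 1)]
    return [round(b - a, 2) for a, b in zip(elevations, elevations[1:])]

print(angular_gaps_deg(10))  # shallow slant: the gaps shrink gently
print(angular_gaps_deg(70))  # steep slant: the gaps shrink proportionally faster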
Vision also uses changes in the visibility of textures to indicate what
is foreground and background. Is there an equivalent at a tactile vantage point?
Consider the possible cases. "Accretion" of texture is texture
becoming evident in the optic array at our vantage point. "Deletion"
is the reverse. Gibson (1979) described the following case. A mousepad being
pulled out from under a foreground newspaper covering it is revealed by its
texture visibly accreting at its common border with the cover. If the pad is
progressively covered by the foreground newspaper being pushed over on top of
it, its texture is optically deleted.
Is there a tactile equivalent for Gibson's case? Certainly we can feel
the pad's texture moving out from under a cover sheet, accreting in touch,
indicating it was behind the cover and is now being exposed. And we also can
feel a cover sheet moving over a pad and concealing its texture.
Besides Gibson's case, vision also has texture accretion and deletion
that occurs because of illumination, and a rather different kind that
occurs when the medium for light transmission has elements swirling in it. A
searchlight playing over a prison wall in a 1930's black-and-white movie often
reveals the wall's texture of bricks accreting at the leading edge of the pool
of light. The texture at the trailing edge is deleted, falling into
invisibility. The accretion and deletion specify a single surface, the wall, as
foreground, with no other surface as background. Shadows racing over the wall
have similar effects.
A searchlight beam passing through empty air is invisible till it hits a
reflector (the wall, in this prison movie). To add to our little drama, consider
the night air is full of twirling snowflakes, gently falling. The snow may be
invisible till it enters the searchlight beam (optically accretes). It passes
into invisibility again when it falls through (deletes). The accretion specifies
the texture elements are in the foreground. So too does the deletion. The
elements in turn specify a volume of space, which we often informally call
"the searchlight beam".
Snow in daylight can provide another case of accretion and deletion. If
our vantage point is in front of a dark surface, such as a dark hill set in a
snow-covered field, silhouetted against a white sky, falling snow may only be
visible when it has the hill as background. The snow can be invisible against
the sky. It only comes into view when it is in front of the hill from our
vantage point. It is invisible again when it is against the snow-covered field.
From our vantage point, the accretion of snow at the brow of the hill specifies
a volume of elements in front of the dark border, and the deletion
at the border at the base of the hill does the same.
Touch is hardly as richly endowed as vision with cases of accretion and deletion. Touch generally operates more like an eye scanning over elements on a fixed surface. We often look at an element, allow it to fall into our periphery, and then gaze away so the element is too far to the side to be seen. Similarly, when touch runs a hand over a surface, texture elements come onto our skin, pass across the skin, and then fall too far behind to be in contact. However, we are not totally without some tactile media. In addition to direct contact, touch can use a rod as an intervening medium to feel the roughness of a surface (Lederman, 1982). Textural roughness in the surface we are stroking causes vibrations to arise in the rod, and to vary as we pass over different surfaces. Further, touch can use covering material as a medium to "palpate" surfaces beyond the cover. The media through which we feel add their own roughnesses, vibrations and thermal properties we have to discount to detect the distal target. When we drive a car or ride a bike, the car or bike will tell us about the rough road we are careering over. They also add their own vibrations. That is, we often have to realize what arises because of media between our vantage point and the target. Touch isolates vibrations in a medley and discerns which are from sources near to us and which are from afar.
Like vision, touch offers accretion and deletion of texture from two
surfaces in close proximity to each other. The event specifies foreground,
background and our vantage point. Unlike vision, touch's use of media does not
involve accretion and deletion due to a secondary source of energy, such as a
searchlight, or texture in a medium accreting and deleting.
Coda
There
are many ways in which vantage points arise in touch, in tactile tasks and in
space perception served by touch, because touch deals with direction. Some show
us limitations of the observer. Some are disembodied, like intended locations.
Some suggest practical ways of using pictures, using outlines for edges, and
directions of elements for directions of referents. Some are especially easy to use when they allow invariants in
the apparent orientation of moving objects. Some suggest perspective is present
in touch. The parallels with vision are extensive, as consideration of direction
shows, but not complete, as consideration of accretion and deletion of texture
reveals.
I
may have left the impression that vision's vantage point is plain at all times.
This was handy to introduce my topic. But I should not let it go at that. Our
visual impression that we have a single vantage point optically is deceptive. We
look with two eyes, not one. But objects often seem to lie in one visual
direction. The laws of visual direction, and the conditions under which we will
seem to have one visual vantage point, are now a subject for animated debate
(Ono and Mapp, 1995). It seems
fitting that we should realize we could try to discover the laws of vantage
points in touch just as fresh views of visual vantage points are being explored.
References
Arnheim, Rudolf (1974) Art and visual perception
: The new version. Berkeley and Los Angeles: University of California
Press
Boring, Edwin G. (1950) A history of experimental
psychology New York: Appleton Century Crofts.
Boring, Edwin. G. (1942) Sensation and perception
in the history of experimental psychology. New York: Appleton Century
Crofts
Cabe, Patrick A., Wright, C. D. and Wright, M.A. (in
preparation) Descartes' blind man revisited: Bimanual triangulation of
distance using static hand-held rods.
Corcoran, D.W.J. (1977) The phenomenon of the
disembodied eye or is it a matter of personal geography? Perception, 6,
247--253.
Costall, Alan P. (1990) Seeing through pictures. Word
and Image, 6, 273--277.
Cutting, James. E. and Massironi, M. (1998) Pictures
and their special status in perceptual and cognitive enquiry. Perception
and cognition at century's end , 137--168. New York: Academic Press.
D'Angiulli, Amedeo, Kennedy, John M. and Heller,
Morton A. (1998) Blind children recognizing tactile pictures respond like
sighted children given guidance in exploration. Scandinavian
Journal of Psychology, 39, 187--190.
Edman, Polly (1992) Tactile graphics. New
York: American Foundation for the Blind.
Eriksson, Yvonne (1998) Tactile pictures:
Pictorial representations for the blind 1784--1940. Gothenburg:
Gothenburg Studies in Art and Architecture.
Gibson, James J. (1979) The ecological approach to
visual perception Boston: Houghton-Mifflin.
Gogel, Walter C. (1961) Convergence as a cue to the
perceived distance of objects in a binocular configuration. Journal of
Psychology, 52, 303--315.
Haber,
L. , Haber, R.N., Penningroth, S., Novak, K. and Radgowski, H. (1993)
Comparison of nine methods of indicating the direction to objects: data from
blind adults. Perception, 22, 35--47.
Haber, Ralph N., Haber, L. R., Levin, C. A. and
Hollyfield, R. (1993) Properties of spatial representations: Data from
sighted and blind subjects Perception and Psychophysics, 54,
1--13.
Heller, Morton A., Kennedy, J.M. and Joyner, T.A.
(1995) Production and interpretation of pictures of houses by blind people. Perception,
24, 1049--1058
Heller, Morton A. and Kennedy, J. M. (1990)
Perspective taking, pictures and the blind. Perception and Psychophysics,
48, 459--466.
Holmes, Emily, Hughes, Barry and Jansson, Gunnar
(1998) Haptic perception of texture gradients. Perception, 27,
993--1008.
Kennedy, John M. (1993) Drawing and the blind.
New Haven, Ct: Yale Press
Kennedy, John M. (1997) How the blind draw
Scientific American, 276, 60--65
Kennedy, John M. and Campbell, J.A. (1985)
Convergence principle in blind people's pointing. International Journal
of Rehabilitation Research, 8, 189--210.
Kennedy, John M., Gabias, P. and Heller, M. A. (1992)
Space , haptics and the blind Geoforum, 23, 175--189.
Klatzky, Roberta L. (1999) Path completion after
haptic exploration without vision: Implications for haptic spatial
representations Perception and Psychophysics, 61, 220--235.
Landau, Barbara (1991) Spatial representation of
objects in the young blind child Cognition, 38, 145--178.
Landau, Barbara and Gleitman, Leila R. (1985) Language
and experience: evidence from the blind child Cambridge, MA: Harvard
Press
Landau, Barbara, Spelke, Elizabeth and Gleitman, H.
(1984) Spatial knowledge in a young blind child. Cognition, 16,
225--260.
Lederman, Susan J. (1982) The perception of texture
by touch. In W. Schiff and E. Foulke (Eds.) Tactual perception: A
sourcebook Cambridge: Cambridge University Press
Locke, John (1690) Essay concerning human
understanding. London.
Lockman, J. J., Rieser, J.J. and Pick, H. L. (1981)
Assessing blind traveller's knowledge of spatial layout. Journal of
Visual Impairment and Blindness, 7, 321--326.
Loomis, Jack M., Klatzky, R. L., Golledge, R.G.,
Cicinelli, J.G., Pellegrino, J.W. and Fry, P.A. (1993) Nonvisual navigation
by blind and sighted: Assessment of path integration ability Journal of
Experimental Psychology: General, 122, 73--91.
Lopes, Dominic M.M. (1997) Art media and the sense modalities: Tactile pictures. The
Philosophical Quarterly, 189, 425--440.
Millar, Susanna (1985) Movement cues and body
orientation in recall of locations by blind and sighted children. Quarterly
Journal of Experimental Psychology A, 37, 257--279.
Millar, Susanna (1994) Understanding and
representing space. Oxford: Oxford Press.
Mirabella, Giuseppe and Kennedy, John M. (in press)
Which way is upright and normal? Haptic perception of letters above head
level. Perception and Psychophysics.
Morgan, M. J. (1977) Molyneux's question Cambridge: Cambridge Press
Morrongiello, Barbara A., Timney, B., Humphrey, G.K.,
Anderson, S. and Skory, C. (1995) Spatial knowledge in blind and sighted
children Journal of Experimental Child Psychology, 59, 211--233.
Natsoulas, T. (1966) Locus and orientation of the
perceiver (ego) under variable, constant and no perspective instructions. Journal
of Personality and Social Psychology, 3, 190--196.
Ono, Hiroshi and
Mapp, A. P. (1995) A restatement and modification of Wells-Hering's laws of
visual direction Perception, 24, 237--252
Pierantoni, Ruggero. (1986) Forma fluens.
Torino: Boringhieri.
Revesz, Geza (1950) The psychology and art of
the blind London: Longmans Green
Rubin, Edgar (1915) Synsoplevede figurer.
Copenhagen: Gyldendals
Shimojo, S., Sasaki, M., Parsons, L. M. and Torii, S.
(1989) Mirror reversal by blind subjects in cutaneous perception and motor
production of letters and numbers. Perception and Psychophysics, 45,
145-152.
Turvey, Michael (1995) Dynamic touch American
Psychologist, 51, 1134--1152.
Wagner, D., Pick, H. L and Rieser, J. J. (1996) Two
processes in the recalibration of rotary motion Paper presented at the
meeting of the Psychonomic Society, October 31--November 3, Chicago.
Worchel, P. (1950) Space perception and orientation
in the blind Psychological Monographs, 65 (15).