Rethinking Boundaries

William Keays

The Fantastic Corporation, 6928 Manno, Switzerland. Email: keays@alum.mit.edu

Abstract

Amidst an unprecedented diversity of artistic modes brought forward by a massive influx of information and communication technology, the notion of interface, the connection between the person and the object, between the real and the virtual, assumes a role of paramount significance. Where once the object, the medium, and the roles of its creator and audience were securely defined, now nothing can be taken for granted. Just as the modes of communication are undergoing a fundamental transformation, so is the nature of the artist. This talk discusses a body of work undertaken by the author that is focused on the boundary between physical and electronic realms while simultaneously raising issues that, due in part to the fundamentally technological nature of the work, challenge conventional notions on the delineation of artistic practice.

Introduction

What are the roots of interaction between humans and their virtual worlds? Although many of the interactive systems in use today, such as the virtual reality CAVE, have an obvious lineage in ritualistic activities reaching as far back as the prehistoric caves of Lascaux, the work presented here approaches the matter from a more specific view, that of the artist in contemporary society. For this purpose the chosen point of departure is the early twentieth century, when the role of the artist was clearly defined and distinct from that of the scientist, and suddenly something remarkable happened: art began to move, literally. Gabo, Calder, and Moholy-Nagy created works with moving parts demanding a whole new method of interpretation.

From this point on, art no longer functions solely as an intellectual exercise absorbed through the eyes, but becomes a dialogue involving multiple senses: sight, sound, and touch. This movement flourishes in the post-WW2 period with a new pluralism in artistic expression and an influx of new technology. By the mid-sixties several groups came into being, such as Billy Kluver and Robert Rauschenberg’s E.A.T. (Experiments in Art and Technology), with the specific objective of merging art and technology.

Of course there is another very significant technological and cultural phenomenon going on during this period: the proliferation of communication and computing technology to the masses, nothing short of a paradigm shift. Engineers and designers work tirelessly to define the languages and protocols that will enable people to discover, navigate and control this new world. So, as an endnote to modernism, we see a parallel effort emerging between artists and scientists on the boundary of the real and the computer-generated virtual universe, both ultimately focused on diminishing the presence of that boundary while the long-established boundary differentiating their roles in society dissolves in unison.

Thus is defined the psychological mindset in which the work presented here was created. This complete body of work was initiated at the MIT Media Lab during my tenure there as a graduate student. It is focused on four themes: alternative interfaces, sculptural 3D displays, interaction floors and site-based interactive art.

Alternative Interfaces

Before getting into the technical details of these prototypes, it would seem appropriate to establish the motivation for this course of investigation. The impetus is derived from the unsatisfactory qualities that are observed in the use of our media in conventional terms. These can be observed both in interactive art applications and in general-purpose computer applications; the standard computer “desktop” serves as an example.

The “desktop” metaphor is certainly the most common interface humans have with virtual realms. It consists of a series of windows on a monitor containing text and images controllable by mouse and keyboard. This seeing-and-pointing configuration was invented at Xerox in the 1970s and became massively popular in the mid-eighties with the advent of personal computers. It represents an efficient means of managing computer-related tasks given the very limited input/output devices available, but at the same time it can be somewhat unsatisfactory and can be seen as a barrier as much as a conduit.

MIT professor Hiroshi Ishii states the following on the current state of this boundary: “We live between two realms: our physical environment and cyberspace. Despite our dual citizenship, the absence of seamless coupling between these parallel existences leaves a great divide between the worlds of bits and atoms. At the present, we are torn between these parallel but disjoint spaces.” [1] In an unrelenting effort to address this situation, Ishii and his Tangible Media research group have created a large number of physically oriented prototypes that attempt to diminish the boundary between the real and the virtual, none of which use a mouse, keyboard or desktop metaphor. Instead, physical objects bearing a high affinity with the virtual component are manipulated. Thus a tight feedback loop is established between the actions of the user and the response, and the presence of the technology is effectively diminished; these are the qualities I sought in creating the following prototypes.

Fabric membrane Interface

This prototype explores the potential of a tactile interface where the physical qualities of the manipulative have a very high affinity with the corresponding visual effect. A section of stretchable fabric (Lycra) is installed, in tension, over an upright rigid frame. The membrane is placed alongside the computer display it will control. Behind the membrane is placed a video camera pointed toward the membrane. A light source is placed in front of the membrane. The user controls the display on screen by pressing gently into the fabric. When the interface is in use, changes in light intensity occur on the backside of the membrane. These changes are observed by the video camera and are analyzed by the running program.

Figure 1. Fabric interface.

The display component of this program consists of a matrix of squares positioned in three-dimensional space. The surface of this array of squares corresponds to the area of the membrane interface. The squares are assigned properties of motion that mimic the physical properties of the fabric of the membrane. More specifically, each square is attached to each adjacent square by a virtual spring. When one square is moved, the adjacent squares will follow suit as if a real spring were in between them.

[1] Ishii, Hiroshi and Ullmer, Brygg (1997) Conference paper, CHI’97 Conference Proceedings, p. 234.

When the user pushes in on the surface of the membrane, squares on the display in the corresponding region are moved along the negative Z-axis accordingly. As all the squares are in an interconnected network of invisible springs, the whole image on the screen assumes a coherent elastic behavior that works synchronously with the physical membrane at the user’s fingertips.
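To make the virtual-spring behaviour concrete, the following minimal Python sketch (not the original software) updates a grid of squares from a camera-derived pressure map; the grid size, gains, damping and the wrap-around neighbour handling are illustrative assumptions.

```python
import numpy as np

GRID_W, GRID_H = 32, 24      # squares in the virtual mesh (assumed values)
STIFFNESS = 0.25             # coupling to the four neighbouring squares
RESPONSE  = 0.15             # coupling to the camera-derived pressure map
DAMPING   = 0.90             # velocity damping so the mesh settles

z   = np.zeros((GRID_H, GRID_W))   # Z displacement of every square
vel = np.zeros_like(z)

def step(pressure):
    """Advance the mesh one frame.

    `pressure` is a GRID_H x GRID_W array in [0, 1], derived from the
    brightness changes the camera sees behind the membrane; 1 means a
    strong push, which drives the square along the negative Z axis.
    """
    global z, vel
    # Discrete Laplacian: each square is pulled toward the average of its
    # four neighbours, which plays the role of the "virtual spring" coupling.
    # np.roll wraps at the edges; a real implementation would clamp instead.
    neighbours = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                  np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 4.0
    spring_force = STIFFNESS * (neighbours - z)
    push_force   = RESPONSE * (-pressure - z)   # pressed -> negative Z
    vel = DAMPING * (vel + spring_force + push_force)
    z += vel
    return z

# Synthetic test: a finger-sized push in the middle of the membrane.
pressure = np.zeros((GRID_H, GRID_W))
pressure[10:14, 14:18] = 1.0
for _ in range(60):
    step(pressure)
print(round(z.min(), 3), round(z[0, 0], 3))  # centre dips; the rest follows slightly
```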

Kinetic Interface

A plaster cube is suspended in space 1m above the ground by six cables. The cables are anchored visibly to the floor and ceiling with stretched fabric. Attached to the cube is a motion and orientation sensing device. Motion in the X, Y and Z directions can be sensed, as can rotational movement about all three axes.

Figure 2. Kinetic interface.

On the screen is a three-dimensional model consisting of a 4 by 4 array of semi-translucent cubes. At the center are colored slabs, which travel in the gaps between the cubes. They are color-coded red, blue and green to indicate the X, Y, and Z axes. When the suspended cube is moved along any of the three axes, the corresponding slabs move in unison.

The key aspect of this device is that its aesthetic configuration is such that the user will recognize the physical dynamics of the interface upon visual contact. When it is in use, a tight, effortless association forms between the physical gestures of the user and the activity on the screen, thus diminishing the barriers normally present in interface design.
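As a minimal illustration of the sensor-to-display mapping described above, the sketch below assumes normalised translation readings from the cube’s motion sensor and a single, arbitrarily chosen gain; it is not the original code.

```python
import numpy as np

# Gain between sensed cube motion (metres) and slab travel in the on-screen
# model (model units); the value is an assumption chosen for illustration.
GAIN = 25.0

def slab_offsets(sensed_translation):
    """Map the sensor's X/Y/Z translation readings from the hanging plaster
    cube to the displacements of the red, green and blue slabs, which run in
    the gaps of the 4 x 4 cube array along those same axes."""
    return GAIN * np.asarray(sensed_translation, dtype=float)

# A gentle 2 cm push along X moves only the red slabs.
print(slab_offsets([0.02, 0.0, 0.0]))    # [0.5 0.  0. ]
```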

Fiber-Video Input technique

At a resolution of 640 by 480, an incoming video image carries over 300,000 discrete units of information in terms of color and intensity. Refresh rates ranging between 10 and 30 frames per second clearly make this a very high-bandwidth input channel. Normally all this information is part of a single coherent image, but it need not be.

An alternative type of input can be created by coupling bundles of optical fibers to a video camera. Instead of carrying images of its immediate surroundings, the camera carries an image of the light emanating from the ends of a large number of optical fibers as described below.

A bundle of optical fibers is truncated cleanly at a right angle and placed orthogonal and in close proximity to the plane of vision of a video camera. The camera is attached to the computer through a standard video port. Software on the computer is created such that the variations in light observed in each individual fiber can be read and interpreted. This makes it possible for an unmodified, standard computer to accept a large number (10,000 or more) of discrete inputs of intensity and color range, exceeding by a large margin the number of discrete values that conventional input devices can provide. The source of input can be derived from any direction, from any conceivable device that has the ability to control the light going into one or more optical fibers. The fibers do not require any power, do not generate any heat and can be tunneled into hard-to-reach areas.
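A rough Python sketch of the reading step follows, assuming the fibre ends occupy a regular grid in the camera’s field of view (the arrangement used in the second coupling prototype described below); the resolution and grid dimensions are assumptions, and a synthetic frame stands in for live video.

```python
import numpy as np

FIBERS_X, FIBERS_Y = 100, 100          # 10,000 fibre ends in a regular matrix
FRAME_W, FRAME_H   = 640, 480          # resolution of the coupled camera

def read_fibers(frame):
    """Return a FIBERS_Y x FIBERS_X array of mean intensities, one per fibre.

    `frame` is a greyscale H x W array as delivered by a standard video
    input; each fibre end is assumed to occupy one cell of a regular grid
    in the camera's field of view.
    """
    cell_h = FRAME_H // FIBERS_Y
    cell_w = FRAME_W // FIBERS_X
    trimmed = frame[:cell_h * FIBERS_Y, :cell_w * FIBERS_X]
    cells = trimmed.reshape(FIBERS_Y, cell_h, FIBERS_X, cell_w)
    return cells.mean(axis=(1, 3))

# Synthetic frame: one bright fibre end, everything else dark.
frame = np.zeros((FRAME_H, FRAME_W))
frame[240:244, 320:326] = 255.0
levels = read_fibers(frame)
print(levels.shape, np.unravel_index(levels.argmax(), levels.shape))
```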

Fiber-Video Input Prototype 1

In this prototype, a bundle of 37 optical fibers was integrated into a block-shaped interface made of Lego. Thirty-six of the fibers enter the larger block, where they terminate flush with the surface of the object inside each of the square slots. Three other fibers go into a smaller block with three thumb wheels on it. At the other end, the fibers are cut flush and coupled to a video camera.

Figure 3. Fiber-video prototype 1.

On the computer screen is a box consisting of a series of panels. The user uses the block interface to control the box on the screen. This is accomplished by blocking the light entering different sides of the interface block by placing fingers in the square slots. When this is done, the corresponding component on the display cube is activated. When the wheels on the smaller block are rotated, creating oscillations in the light level entering a given fiber, the cube on the screen rotates about the corresponding axis.

This prototype served as a working model of the fiber-video concept in that the camera was reading information that was not in a two-dimensional plane, and that devices to control the light input could be easily devised. It also demonstrated the need for a more robust fiber-to-camera coupling mechanism.

Fiber/Video Input Prototype 2

Having established feasibility with the first prototype, a second prototype was devised to better illustrate the principles of the concept, and to introduce a coupling device that was more robust and where the incoming fiber bundle would be fully coherent and addressable.

The second prototype also used a cube configuration, this time with 125 input fiber segments. In order to fully expose the configuration, the device was made of clear plexiglass, revealing all the fibers inside and how they terminate at the surface of the cube.

The coupling was completely revised. The fibers are placed before the camera in a regular matrix. Where before a specific mapping was required for randomly bundled fibers, here individual fibers are located by their position on the matrix. Furthermore, the matrix plate on the camera mount is interchangeable to enable alternate input devices to be used.

Figure 4. Fiber-video prototype 2.

The display created for this interface consists of a three-dimensional cube segmented to reflect the positioning of the input fibers on the input device. When the user alters the amount of light entering one of the fibers by placing a finger over it, the corresponding square changes in color and brightness accordingly; this change is accentuated by the movement of the corresponding panels. If the ambient light in the room is bright, the whole cube becomes large; if all fibers are blocked simultaneously, it becomes small.

Three-Dimensional Display

For this work I sought to create a three-dimensional display system where the physical construction of the display would work in unison with its display properties in a sculptural manner. Although a fair number of highly effective three-dimensional display systems exist, including holography and stereovision, none of these suited the purpose of this exercise. No existing system was deemed acceptable, either because it created excessively confined images or because it required the use of prohibitive headgear. I sought to create a three-dimensional display that was inherently sculptural.

Solid-Light Prototype

The display described herein was not designed to compete with any existing alternatives in terms of resolution, color quality or practicality. It was designed as a means of taking full advantage of the high-bandwidth capacity of an output video stream through a medium shaped with materials that carry meaning through their own aesthetic. Having experimented with light pipes in the form of optical fibers on several input devices, it followed that a similar strategy could be used for output.

In this situation, the video camera is replaced by a video projector, and the fibers are replaced by solid acrylic rods. Although the square 3mm acrylic rods used in this experiment do not have the efficient optical properties that fibers do, they work effectively as light pipes over short distances. That is to say that if one end of an acrylic rod is polished smooth and the other end is polished coarsely, most of the light entering the polished end will be diffused through the opposite end, mimicking the behavior of optical fibers. The result is a material with a strong visual aesthetic that can be sculpted and has the ability to carry high-bandwidth video output.

Simulation

In an attempt to anticipate the effect of the solid-light display, an application was created where video input was transposed into the solid-light display in real time. To get a three-dimensional image from video input, a simple interpolation is applied. In a technique devised during the Renaissance, and refined during the Baroque period, roundness of objects is achieved through chiaroscuro, or gradations between light and dark portions. The simulation software creates relief in the same way from the input video image. This is accomplished by using controlled lighting and background, and by placing only simple objects before the camera, such as a person’s hand. With the light at the correct angle and intensity, the parts closest to the camera will have the highest intensity, and those farthest away will have the lowest. In this way, the image of the hand is cast into the display.
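The intensity-to-relief interpolation can be sketched in a few lines of Python; this is an illustration of the principle under the stated lighting assumptions, not the original simulation code, and the four relief levels match the eventual display depth.

```python
import numpy as np

DEPTH_LEVELS = 4   # the solid-light display has four rods of depth

def intensity_to_depth(gray):
    """Quantise a controlled-lighting greyscale image into DEPTH_LEVELS
    relief levels: bright areas (closest to the camera) map to the front
    of the display, dark areas to the back.

    `gray` is an H x W array with values in [0, 255]."""
    normalised = np.clip(gray / 255.0, 0.0, 1.0)
    return np.minimum((normalised * DEPTH_LEVELS).astype(int), DEPTH_LEVELS - 1)

# Synthetic "hand in front of a dark background": a bright blob with sloping sides.
img = np.zeros((24, 16))
img[8:16, 5:11] = 220.0                                  # nearest part of the hand
img[6:18, 3:13] = np.maximum(img[6:18, 3:13], 120.0)     # sloping sides
depth = intensity_to_depth(img)
print(np.unique(depth))    # e.g. [0 1 3]: background, edge, near surface
```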

Figure 5. Simulation of solid-light display.

Following extensive testing with the simulation software, a dimension of 16x24x4 was chosen. This dimension would be sufficiently large to view identifiable objects and observe relief into the third dimension. The top ends of the 3mm square rods were trimmed at 45 degrees to point the light emitted toward the observer. The top boundary of the display was also tilted 15 degrees to enhance this effect. The rods were placed in randomly tilted positions while maintaining a regular three-dimensional grid configuration. The underside of the clear epoxy base was polished to allow the projected light to enter.

Software

The software used to drive this display prototype worked as follows: the live input image was captured and interpolated into a low-relief (four-pixel depth) three-dimensional model using the technique described in previous sections. This 3-D data was cast into a two-dimensional matrix where the four-pixel depth was collapsed into four adjacent squares creating a disjointed-looking reproduction of the original video input. This disjointed image is then projected into the base of the solid light display where each square in the projection corresponds to a specific rod of known location and height such that the image appears in the display in correct relief.
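A small sketch of the collapse step is given below. The side-by-side placement of the four depth squares is an assumption made for illustration; the essential point is that each projected square targets exactly one rod of known position and height.

```python
import numpy as np

DEPTH_LEVELS = 4           # rods per grid position, front to back
GRID_H, GRID_W = 24, 16    # footprint of the display (taken from the text)

def collapse_to_projection(depth, brightness):
    """Flatten a low-relief model into the 2-D image sent to the projector.

    Each (row, col) position of the model owns DEPTH_LEVELS adjacent squares
    in the projected image; only the square whose index equals the local depth
    is lit, so the light enters the one rod of matching height."""
    proj = np.zeros((GRID_H, GRID_W * DEPTH_LEVELS))
    rows, cols = np.indices((GRID_H, GRID_W))
    proj[rows, cols * DEPTH_LEVELS + depth] = brightness
    return proj

depth = np.random.randint(0, DEPTH_LEVELS, size=(GRID_H, GRID_W))
brightness = np.random.rand(GRID_H, GRID_W)
projection = collapse_to_projection(depth, brightness)
print(projection.shape)    # (24, 64): the "disjointed-looking" projector image
```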

Figure 6. Solid light display.

Observations

The result was a striking and unique display of embodied light. Effects that had not been anticipated in simulation were observed. Most notably, as the entire device is made of sharp, angular objects, a visual effect of a similar quality was expected, as demonstrated in the simulations. The simulations, however, did not effectively represent the diffused light that would occupy the volume of the device. The result was a soft, luminous, halo-like effect within a hostile jungle of transparent shards; a stunning and highly seductive visual contrast. Unfortunately, these very peculiar properties made it difficult to photograph the display, and so the image provided here can only give a vague impression.

The success of this experiment prompts the creation of a large display of this type that makes use of the full resolution of the output video stream. Because this display has a visual aesthetic in its own right, as opposed to being merely a projection surface, it has great potential as a dynamic display in a public art context. A freestanding large-scale version could work with the use of high-powered projectors; an integrated version could work with a light-emitting diode (LED) matrix replacing the projectors.

metaField: An Interactive Floor

This work draws inspiration from various sources. One of these is a work by the minimalist Carl Andre called 64 Pieces of Aluminum (1969). The 64 square pieces make a 2m square on the floor of the museum that is meant for people to walk on. Although there are no moving parts to this work, it is clearly interactive in a way that most conventional artworks aren’t: you are in physical contact with the piece through your feet and you must move your body to experience it fully. James Seawright, with Network III (1970), takes the next step by having the floor panels light corresponding lights on the ceiling. Myron Krueger completes the sequence with fully interactive video pieces both on the floor and the wall, triggering an era of interactive art that continues to grow in stature. The metaField addresses specific issues related to this type of interface.

Objective

The metaField came about from the desire to create an interactive environment with the following qualities:

1. The ability to accommodate one or several users. This is essential to create a fundamentally social interactive space, which is not the norm.

2. The possibility of full-body, kinetic interaction. This means that the user could not be confined to any type of equipment (mouse, keyboard, headset, joystick). The ability to move the body in an expressive, dance-like manner was paramount.

3. The potential for collaborative, mentally challenging, artistic, and/or practical applications. A generic platform was desirable in order to be able to conduct a wide range of experimentation.

4. Low threshold for engagement. This is a critical factor. The system was to require the absolute minimal amount of effort in order to engage the user. This would apply to both the physical and software components.

Configuration

A floor-based configuration seemed to have the best ability to accommodate these criteria, and so a system involving a video camera and projector pointing down was devised. The projector and camera are placed high enough that their fields of vision and projection can cover the full surface of the floor. A floor size of 3x4m was deemed suitable as it exceeds the human dimension sufficiently to allow the subject a great degree of mobility, while remaining small enough to maintain an acceptable level of granularity and brightness in the projected image for viewing from the eye height of the average person. The positions of the people standing on the floor are detected by the camera and are used to control the projected graphics.
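The position-detection step can be sketched as a simple frame-differencing routine; this is an illustrative stand-in for the vision software actually used, it handles a single person only, and the threshold and image sizes are assumptions.

```python
import numpy as np

THRESHOLD = 40.0   # minimum per-pixel change (0-255) that counts as "someone there"

def locate_activity(frame, background):
    """Compare the live overhead camera frame with an empty-floor reference
    and return the centroid of the changed pixels in normalised floor
    coordinates (0..1, 0..1).

    Handles a single person; tracking several people would additionally
    require connected-component labelling of the difference mask."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    mask = diff > THRESHOLD
    if not mask.any():
        return None                      # floor is empty
    ys, xs = np.nonzero(mask)
    h, w = frame.shape
    return (xs.mean() / w, ys.mean() / h)

background = np.full((120, 160), 30.0)   # synthetic empty-floor image
frame = background.copy()
frame[40:70, 90:110] = 180.0             # a person standing right of centre
print(locate_activity(frame, background))  # roughly (0.62, 0.45)
```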

Applications

Dances with Words

This is an application where collaboration between users is the main objective. When the game is at rest, orbs with words revolving around them hover at the perimeter of the floor. When a person walks onto the floor, one of the orbs immediately finds the person and follows them wherever they go on the floor, with the words continuing to circle around them. The orbs change size depending on the gestures of the person. If the person stretches their arms out wide, the orb becomes large; if they hold their arms close to their body, their orb becomes small. When two people approach each other, a tension arises between their orbs. When they get too close, their orbs merge with a frenetic manifestation, as if one orb is attempting to dominate the other. When the two people separate, they may have exchanged orbs.
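A toy Python sketch of the orb behaviour is given below; the easing constant, radius mapping and merge distance are invented for illustration and do not come from the original application.

```python
from dataclasses import dataclass

EASE = 0.2              # how quickly an orb catches up with its person
MERGE_DISTANCE = 0.15   # floor-space distance at which two orbs interact

@dataclass
class Orb:
    x: float
    y: float
    radius: float = 0.05

    def follow(self, person_x, person_y, arm_spread):
        """Drift toward the tracked person and scale with their gesture:
        `arm_spread` is the normalised width of the person's silhouette,
        so outstretched arms grow the orb and tucked arms shrink it."""
        self.x += EASE * (person_x - self.x)
        self.y += EASE * (person_y - self.y)
        self.radius = 0.03 + 0.12 * arm_spread

def orbs_in_tension(a, b):
    """True when two orbs are close enough to merge and 'fight'."""
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 < MERGE_DISTANCE

orb = Orb(0.0, 0.0)
for _ in range(20):
    orb.follow(0.6, 0.4, arm_spread=0.8)
other = Orb(0.7, 0.45)
print(round(orb.radius, 2), orbs_in_tension(orb, other))   # 0.13 True
```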

Figure 7. Dances with Words.

Aside from its playful aspect, the orbs and the tension that occurs when two of them get close together are an interesting metaphor for the personal space we maintain around our bodies at all times. Interestingly, adults using this application would immediately abandon their personal space in favor of the orbs, inducing unexpected social behavior between strangers and friends alike. On other occasions, groups of children were allowed to take part, usually resulting in an uncontrollable frenzy of activity.

Letter Blocks

The Letter Blocks application begins with a large array of white three-dimensional letter blocks projected on the floor, each with one side visible with a letter on it. When a person steps onto the floor, the blocks, which are about 20cm wide, rotate on themselves, revealing the next letter of the alphabet on the side that appears. As the letters turn they take on color; each person walking on the floor is assigned a different color. People attempt to create words by moving around, or attempt to convert as much territory as possible to their color. When a given area is inactive, it slowly fades back to white.
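The per-block logic can be sketched as a small state machine; the fade rate and colour handling here are assumptions, not the original implementation.

```python
import string

FADE_PER_FRAME = 0.02   # how quickly an untouched block returns to white

class LetterBlock:
    """One projected block: it shows a letter, remembers who touched it
    last, and fades back to white when the area goes quiet."""

    def __init__(self):
        self.letter_index = 0          # index into the alphabet
        self.colour = (1.0, 1.0, 1.0)  # white when inert
        self.saturation = 0.0          # 0 = white, 1 = fully the walker's colour

    def step_on(self, walker_colour):
        # Rotate to the next letter and take on the walker's colour.
        self.letter_index = (self.letter_index + 1) % 26
        self.colour = walker_colour
        self.saturation = 1.0

    def tick(self):
        # Called every frame; untouched blocks slowly fade to white.
        self.saturation = max(0.0, self.saturation - FADE_PER_FRAME)

    @property
    def letter(self):
        return string.ascii_uppercase[self.letter_index]

block = LetterBlock()
block.step_on((1.0, 0.2, 0.2))     # a "red" visitor steps on the block
for _ in range(10):
    block.tick()
print(block.letter, round(block.saturation, 2))   # B 0.8
```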

Figure 8. Letter Blocks.

The creation of the Letter Blocks application came somewhat in response to the previous Dances with Words. It was an attempt to address the unsettling, fleeting quality of the orbs. The lack of stable elements on the floor surface had a de-laminating effect upon the graphics, making the sensation that the orbs were part of the floor unconvincing. With Carl Andre’s magnesium squares in mind, a more architectural approach was taken. The blocks would be at fixed positions with respect to the existing real architecture, and they would never all be moving simultaneously. The fade-to-white after a short period of inactivity suggests that the surrounding architecture is pulling it back to its inert state. This strategy was effective in anchoring the projected images to the floor and diminishing the undesirable “painted projection” effect, effectively demonstrating that the content of projected images has a considerable impact on their perceived concreteness. The Letter Blocks application also introduced the notion of creating a deeper sense of immersion by preoccupying the subject with an explicit intellectual exercise. These two key aspects seemed to work in cohesion to create a more engaging experience.

Data WalkAround

This application explores the possibility of using a large floor display system to study data models. A given data set is selected and rendered into a three-dimensional model. This model is projected onto the floor of the metaField.

When a person walks onto the floor, the model reorients itself so as to be seen in the correct perspective from where that person is standing. When the person walks around the model, the perspective view changes continuously such that the three-dimensional object is always seen in correct perspective, regardless of the position of the viewer.
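The reorientation amounts to rotating the projected model about its vertical axis so that it faces the tracked viewer; the sketch below illustrates this under the assumption of normalised floor coordinates, and is not the original code.

```python
import math

def model_yaw_for_viewer(viewer_x, viewer_y, model_x=0.5, model_y=0.5):
    """Return the rotation (radians, about the vertical axis) that turns the
    projected model so its front faces a viewer standing at (viewer_x,
    viewer_y) on the floor. All coordinates are normalised floor positions;
    the model centre defaults to the middle of the metaField."""
    return math.atan2(viewer_y - model_y, viewer_x - model_x)

# As the person walks around the model, the yaw tracks them continuously.
for pos in [(1.0, 0.5), (0.5, 1.0), (0.0, 0.5)]:
    print(round(math.degrees(model_yaw_for_viewer(*pos))))
# 0, 90, 180
```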

Although this application was not thoroughly successful, it was insightful. The immediate disadvantage of this strategy is that only one person can view the model correctly at a time. This may be fine for more secluded or more personal viewing environments, but it works in direct contrast to the inviting omni-accessibility that is one of the floor interface’s great assets. This application also raised other issues about the feasibility of projecting three-dimensional objects. In previous models, the third dimension had only been used marginally. This application revealed that a full-blown three-dimensional object requires a head-on view to be effective. The low-angle views of viewers on a floor-based projection quickly distort and diminish the intended three-dimensional effect.

Puzzle

In this application, the traditional sliding-squares puzzle was adapted to the metaField. In this game, the image is broken up into a 4 by 6 grid with one piece removed. When a person steps on a piece of the image that is adjacent to the vacant slot, that piece slips into the vacant slot. In doing so, the person can break up or reassemble the pieces of the puzzle to make the picture whole.
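The underlying game logic is the classic sliding-tile move, sketched here with the 4 by 6 grid from the text; the data layout is an assumption for illustration.

```python
ROWS, COLS = 6, 4   # the floor image is broken into a 4 x 6 grid

def try_slide(board, blank, stepped):
    """Slide the stepped-on piece into the blank slot if they are adjacent.

    `board` is a ROWS x COLS list of piece ids, `blank` and `stepped` are
    (row, col) positions; returns the new blank position."""
    br, bc = blank
    sr, sc = stepped
    if abs(br - sr) + abs(bc - sc) != 1:
        return blank                       # not adjacent: nothing moves
    board[br][bc] = board[sr][sc]          # the piece slips into the gap
    board[sr][sc] = None
    return stepped

board = [[r * COLS + c for c in range(COLS)] for r in range(ROWS)]
board[5][3] = None                          # one piece removed
blank = (5, 3)
blank = try_slide(board, blank, (5, 2))     # a person steps next to the gap
print(blank, board[5])                      # (5, 2) [20, 21, None, 22]
```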

Figure 9. Puzzle.

This work had an interesting visual aesthetic but proved problematic in terms of scale. This game is interesting to play when you have a full view of all the pieces, such as when you are holding the small original version in your hand. It did not scale well to the 4x3m format for a number of reasons. The most obvious problem is that it is impossible to see all the pieces at once if you are standing in the middle of the floor. Furthermore, the low viewing angles make some of the far-away pieces hard to identify. These factors made it difficult to develop a game strategy, and thus created a frustrating experience. On the other hand, it demonstrated the applicability of gaming software to this configuration.

metaCity Sarajevo

In a configuration similar to the Interactive Cinema’s Cinemat, metaCity Sarajevo combines the metaField with an adjacent, vertical projection screen. A map of the former Yugoslavia with Bosnia-Herzegovina at the center is projected on the metaField. A virtual three-dimensional model of a city is projected on the wall portion, with Web pages pasted on the surfaces of the buildings. When a person walks on the territory surrounding Bosnia-Herzegovina, the region they enter changes color, indicating a particular political inclination. When the person then moves onto the Bosnia-Herzegovina portion of the map, it changes color and motif simultaneously. As this happens, one of the buildings on the three-dimensional city model rotates and moves forward to prominently display, in three-dimensional space on the vertical screen, a Web page containing information sympathetic to that particular political orientation. All the different directions for entering Bosnia-Herzegovina invoke Web pages with different political spins on them. For example, if one entered Bosnia-Herzegovina by walking over Serbia, the Web page displayed would be sympathetic toward the Serbian cause; if one walked over Croatia, then the Croatian cause would be favored in the text of the Web site.

Figure 10. metaCity Sarajevo.

Many interesting observations were derived from this configuration. One of the most obvious yet most startling revelations was the apparent appropriateness of projecting flat horizontal imagery onto flat horizontal surfaces. It became evident that placing a large map on a horizontal surface rather than a vertical one made it considerably less abstract. The large scale of the horizontal map enhances this effect further. The second striking feature of this configuration lies in the obvious potential of merging the floor imagery with the wall imagery, suggesting the possibility of highly immersive applications without a high degree of confinement.

Maze

The Maze was the most successful application developed for the metaField by a large margin. It consists of a large-scale, virtual recreation of a well-known toy. The original version consists of a lap-sized shallow box with knobs on two sides. On the surface is a labyrinth with walls sufficiently high to restrict the movement of a standard-sized marble. The labyrinth is interspersed with holes big enough to swallow the marble. The two knobs are used to tilt the surface on two perpendicular axes. Thus, by manipulating the two knobs, the player can drive the marble through the labyrinth while attempting to avoid the holes. If the marble falls through a hole, the player must start over.

Figure 11. metaField Maze.

The metaField Maze is a large virtual version of this game that requires you to move your body across the floor space, rather than turn knobs, to control the path of the marble. This is accomplished by projecting a three-dimensional model of the game on the metaField floor. When the player moves in any particular direction, the model tilts accordingly in that direction, as if it were a large model of the original game pivoting on its center point. When the model is tilted in a given direction, the marble moves in that direction as anticipated. The result is a highly engaging experience that is arguably more fun than the original lap-sized version.
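The control loop can be sketched as a simple mapping from the player’s floor position to a board tilt, followed by elementary marble dynamics; the gain, friction and time step below are invented for illustration, and wall and hole collisions are left out.

```python
import numpy as np

TILT_GAIN = 0.4     # radians of tilt per unit of player offset from centre
GRAVITY   = 9.8
DT        = 1 / 30  # simulation step, one video frame

marble_pos = np.array([0.5, 0.5])   # normalised position on the board
marble_vel = np.zeros(2)

def update_marble(player_pos):
    """One frame of the metaField Maze: the board tilts toward wherever the
    player is standing relative to the centre, and the marble accelerates
    down the resulting slope."""
    global marble_pos, marble_vel
    tilt = TILT_GAIN * (np.asarray(player_pos) - 0.5)   # player offset -> tilt
    marble_vel += GRAVITY * np.sin(tilt) * DT
    marble_vel *= 0.99                                   # a little rolling friction
    marble_pos = np.clip(marble_pos + marble_vel * DT, 0.0, 1.0)  # stay on the board
    return marble_pos

# The player stands near the right edge for a second: the marble rolls right.
for _ in range(30):
    update_marble((0.9, 0.5))
print(np.round(marble_pos, 3))
```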

The Maze application effectively demonstrated the strengths of a kinesthetic floor-based interface. It makes use of the three-dimensionality of the projection, but projects an object that has only very shallow relief. In maintaining this shallow level of relief, the player gets the illusion of the third dimension but does not suffer from the distortion and detachment experienced in the Data WalkAround application, where the projected image went into deep three-dimensional space. From this it was revealed that although the oblique viewing angles dictated by the sheer geometry of the metaField configuration imposed severe limits on the extent to which a three-dimensional object could be projected convincingly, the third dimension could still be employed effectively if maintained in low relief.

One of the key aspects of this application is that it creates a strong and instant association between the kinesthetic activities of the player and the simulated kinetics of the game. This tight bond between real and virtual properties goes a long way in erasing the technological presence of the installation.

As the conventional version of the game is so universally known, most players fully understand the concept at first glance and are eager to engage. Players who apparently had no familiarity with the game could figure it out within seconds. Thus this application effectively reduces the threshold of engagement both in its physical configuration (one simply has to walk onto it; there are no enclosed spaces and no special gear) and in its content (the game is either familiar or otherwise highly intuitive).

Other interesting factors were observed. Upon engaging in this installation, the players have no choice but to develop a strategy, and then act upon this strategy by using the full mobility of their bodies. The need to engage both the body and mind in a simultaneous, concerted effort has the powerful effect of inducing an instantaneous and highly focused state of concentration sometimes referred to as a flow experience. Thus the player’s state of engagement is profound, further diminishing the presence of the technology involved, a highly desirable quality.

Suspended Window: a Site-based Interactive Installation

\"Suspended Window\" is a site-specific interactive art installation done in collaboration with Korean video artist Jay Lee. It attempts the fusion of physical characteristics of a site with parallel computer-generated themes in the form of dynamic computer-generated graphics.

Site and Objective

The site that provided the impetus for this project is in the Center for Advanced Visual Studies at MIT, which is on the third floor of a renovated factory building in Cambridge, MA. On the east wall of the central area is a conventional, horizontally biased window subdivided into 8 sections. Interestingly, this window is nested in a semi-wall of glass blocks encased in the wall; a window within a window. This odd configuration seemed to draw attention to the function of the window as a boundary between two discrete spaces. The doubly tiered configuration of the window invoked the notion of de-laminating the membranes of a boundary. We conspired to do so through an overlap of real, optical, and virtual spaces.

Design and Implementation

To accomplish this objective, a second window frame of similar proportions and construction was created and suspended inside the building, directly facing the original window at a distance of 15ft. The windowpanes in the suspended window were replaced with a semi-translucent material suitable for rear projection. The glass panes on the original window were covered with mirrors. A video projector was used to cast its image upon the outside of the suspended window. A video camera was mounted above the suspended window and pointed toward the real window. This video camera transmitted the images of people walking in between the two windows to the computer. The area in between the two windows was the designated interaction zone.

Figure 12. Suspended Window.

The video image was used for two separate functions: the first was to locate areas of activity in the interaction zone; the second was to infuse the images into the content of the projection. When the interaction zone was at rest, the image projected on the suspended window was of the Boston skyline, a view similar to what one might have seen looking out through the original window during daylight. When a person walked in between the two windows, the skyline image would break up into squares in a fluid and elastic manner in the area of the window corresponding to where the person was standing. This was accomplished by using vision software with the input video to determine the location of the people. The fragment squares were of approximately the same dimension as the glass blocks surrounding the real window. While the image breaks up into squares, a second image is revealed, a reverse image of the input video stream, allowing the viewers to see themselves momentarily through a video mirror. The layering effect is enhanced by the fact that the whole effect is visible through the real mirrors superimposed on the original window.
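The per-square reveal can be sketched as an alpha blend between the resting skyline image and the mirrored live video, driven by an activity map from the vision software; the cell size and the specific blending are assumptions for illustration, not the original implementation.

```python
import numpy as np

CELL = 20   # square size in pixels, roughly matching the glass blocks

def composite(skyline, live, activity):
    """Blend the resting skyline image with the mirrored live video on a
    per-square basis: where the vision system reports activity between the
    two windows, that square 'breaks open' and reveals the video mirror.

    `skyline` and `live` are H x W greyscale images, `activity` is an
    (H // CELL) x (W // CELL) array in [0, 1]."""
    mirrored = live[:, ::-1]                              # video-mirror the viewers
    alpha = np.kron(activity, np.ones((CELL, CELL)))      # expand cells to pixels
    alpha = alpha[:skyline.shape[0], :skyline.shape[1]]
    return (1.0 - alpha) * skyline + alpha * mirrored

skyline = np.full((120, 160), 200.0)                    # stand-in for the skyline image
live = np.zeros((120, 160)); live[:, 120:] = 90.0       # a person on the camera's right
activity = np.zeros((6, 8)); activity[2:4, 1:3] = 1.0   # activity near the left squares
out = composite(skyline, live, activity)
print(out[50, 30], out[50, 130])   # revealed mirror (90.0) vs. untouched skyline (200.0)
```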

Impact

This installation invited subjects to explore a fictitious space created through the delamination of existing boundaries. The gap in between the two windows denotes the space; the optical effect created through the use of multiple physical and virtual layers in an intricate configuration activates the space. Thus, the normal functioning of the window is suspended, and viewers find themselves hovering between the strata of this fictitious space. Their every movement creates organic disturbances in the layers, bringing attention to the nature and function of spatial boundaries, real and virtual.

Conclusion

This paper presented an array of inquiries into the boundary of the real and the virtual by focusing on four themes: alternative interfaces, sculptural 3D displays, interaction floors, and site-based interactive art. The numerous problems found and solved, the observations made and the issues raised all merit a focused course of scientific investigation, but this task is left to others. The works produced hold together through the rationale of artistic rather than scientific investigation; but can we really make a distinction anymore? The exploration of human-computer interfaces between the real and virtual worlds presents such an abstract and subjective task, requiring such intense reflection on the ways we interact in the broadest possible sense, that a purely empirical approach to the problem fails utterly.

Thus we shed another layer of modernism and find ourselves once again, as we did during the Renaissance and numerous other renewals before and since, in a situation where the artist and scientist are mutually dependent and often singular entities. Today, our traditional galleries remind us of museums, our museums remind us of mausoleums, and our art is churned out of research labs, boardrooms and chatrooms. Hence I conclude with the words of Gyorgy Kepes that anticipate this shift. He made these observations upon arriving at MIT in 1967 to establish the renowned Center for Advanced Visual Studies: “I discovered with joy a different and inspiringly encompassing and more objective world; with dismay I also found out how uninformed I was about some of the great achievements of this century. At the same time, I learned to my surprise how uninitiated some of my scientist and engineer colleagues were when it came to the most basic values of artistic sensibility. Gradually I began to see that the world opened up by science and technology could offer essential tools and symbols for the reorganization of our inner world, the world of our sensitivities, feelings and thoughts. Furthermore I came to believe that artists and scientists would be equal partners in this great transformation.” [2]

[2] Davis, Douglas (1973) Art and the Future, New York: Praeger Publishers.

Acknowledgements

I wish to acknowledge the assistance of the following people for contributions made in the form of collaboration, consultancy and software: Jay Lee, Ron MacNeil, Tim McNerney, Matt Lee, John Underkoffler, Flavia Sparacino, Michael Hlavac, Daniel Stevenson, Adriana Vivacqua.

References

Assmann, Peter (1996) Book contribution. In: Gerfried Stocker and Christine Schopf (eds), Ars Electronica 96 Memesis, Vienna: Springer, pp. 394-401.

Burnham, Jack (1967) Beyond Modern Sculpture, New York: George Braziller Inc.

Davis, Douglas (1973) Art and the Future, New York: Praeger Publishers.

de Oliveira, Nicolas, Oxley, Nicola and Petry, Michael (1994) Installation Art, London: Thames and Hudson.

FitzMaurice, G, Ishii, Hiroshi and Buxton, William (1995) Conference paper, CHI'95 Conference Proceedings, pp. 442-449.

Frauenfelder, Mark (1998) Journal article, Wired, no. 6.10, pp. 164-168.

Ishii, Hiroshi & Ullmer, Brygg (1997) Conference Paper: CHI'97 Conference Proceedings, pp 234-241.

Kepes, Gyorgy (1966) Sign, Symbol, Image, New York: George Braziller.

Kirkpatrick, Diane, De La Croix, Horst and Tansey, Richard G. (1991) Gardner’s Art Through the Ages, Orlando: Harcourt Brace Jovanovich.

Krauss, Rosalind (1985) The Originality of the Avant-Garde and Other Modern Myths, Cambridge: MIT Press.

Krueger, Myron (1983) Artificial Reality, Reading: Addison-Wesley.

Lipman, Jean (1989) Calder’s Universe, Philadelphia: Running Press.

McCullough, Malcolm (1996) Abstracting Craft: The Practiced Digital Hand, Cambridge: MIT Press.

Moholy-Nagy, Laszlo (1947) Vision in Motion, Chicago: P. Theobald.


Moser, Mary Anne (1996) Immersed in Technology: Art and Virtual Environments, Cambridge: MIT Press.

Popper, Frank (1993) Art of the Electronic Age, Singapore: Thames and Hudson.

Popper, Frank (1968) Origins and Development of Kinetic Art, Greenwich: New York Graphic Society.

Sakane, Itsuo (1999) The Interaction ’99, Ogaki City: IAMAS.
