
Designing with natural algorithms and artificial ecologies: Some notes on a new constructivism.

Abstract
This paper scrutinises examples of computation in design culture through a constructivist philosophy; it asks how we come to know design or space differently through algorithmically intensive design processes. The point of departure for this enquiry is Rivka and Robert Oxman's The New Structuralism, which uses fabrication and materialism as a focus for considering computation in design. Here, we speculate that computational processes are implicated in the knowing of design and space. Constructivism advances the structuralist position, providing a framework to interrogate computation's influence on design. Using examples from a design studio grounded in contemporary theory, this paper extends current computational paradigms of geometric or material process; it investigates how these processes, taken to the extreme, can shift understanding. While there is little originality in claiming computation changes our perception of design, a constructivist ideology provides a framework to scrutinise its effects. Findings point to designers shifting away from geometrically complicated architecture; as we mine into the granular DNA of the built environment through artificial systems, we ask how computation disrupts or enhances design culture, and how designers come to know place.

1. INTRODUCTION
There is an increasing recognition that space is implicated in thought. Cognitive scientist Andy Clark supports this position and draws on compelling examples to support the argument [1, 2]. His claims are illustrated with examples of species using augmentations, what he calls environmental scaffolds, to tune spatial conditions for assisting cognition. Architectural theorist Richard Coyne has drawn on Clark's work to critique virtual environments as places that affect thought [3]. Coyne argues, for example, that virtual worlds function like the non-places conceived by anthropologist Augé [4], suggesting they restrict rather than encourage thought. Most recently, an article in The Quarterly Journal of Experimental Psychology [5] documents experiments revealing that memory is affected when we move from one space to another; this was deemed sufficiently important to the discipline of architecture that it was even featured in ArchDaily [6]. This article draws the discussion regarding computation in design away from a structuralist focus on form and software. If indeed space influences thought, we ask what influence a contemporary appropriation of computation has on the relationship between designers and their construction of space. We begin with examples from a design studio focusing on natural algorithms for the simulation of architecture, to advance the organisation of complex buildings and urban spaces. We then move on to a second set of design examples, where the city is interrogated to expose its multitude of inter-related systems and ecologies. These ecologies are modelled, creating a data-intensive representation of place. We hypothesise that this design process of programming has different cultural origins to traditional architectural design; as such it warrants consideration in regard to the designer's knowing of space and the city.

2. WAYS OF SEEING
Firstly, let us look at the contemporary framing of technology within design and architecture. Branko Kolarevic [7, 8] has written several comprehensive anthologies that serve as timely barometers of the evolving relationship between computation and design. More recently, Rivka and Robert Oxman guest-edited Architectural Design: The New Structuralism [9], in which they use a structuralist ideology to explore structures such as computing methodologies, digital/analogue systems and programming languages that currently seem to drive a popular aspect of architecture. Here, we posit that structuralism only interrogates part of the spatial equation. By proposing a constructivist ideology, which focuses on the question of knowledge, we complicate the Oxmans' discourse. Designers are becoming more immersed in the granularity of computation through software such as Rhino, Grasshopper and Generative Components. Consequently, they engage in creative practices that are markedly different from geometric drawing, which historically dominated computer-aided design [10, 11]. This paper is an exhortation to reframe the questions surrounding the use of technology, as it moves beyond appropriation for solely geometric operations.

2.1. The New Structuralism
Structuralism is a philosophical movement that emerged in the 20th century, claiming that human behaviour is determined by various underlying structures. In 1962 Thomas Kuhn, one of its notable proponents, used structuralism to undermine the established view that science progressed through a gradual accumulation of established facts and theories. In his seminal text The Structure of Scientific Revolutions [12], Kuhn argues that the process of enquiry is the important factor in scientific revolution, then revisits historical examples to illustrate their adherence to a structure. Structuralism also emerged as an architectural movement in the middle of the 20th century; it too privileged systems and structures as underlying architectural design. Typically these were structures of function, space or circulation; The New Structuralism substitutes these with systems of parametric and algorithmic computation (Figure 1).

Figure 1: A parametric computing structure that produces geometry.

Space and structure are increasingly being designed through the type of visual programming interface illustrated in Figure 1: the Grasshopper parametric plug-in for the Rhino 3D modelling software. This type of interface has been used for some time in computationally intensive software favoured by multimedia artists and composers, such as Max/MSP (http://cycling74.com/) and Pure Data (http://puredata.info/). Although it is relatively new within architectural design, the modular methodology has already been taken to the extreme by Burry et al. [13] on the Sagrada Família. Burry is exploring a modular approach to parametric design much like the modular approach to programming. The advantage of this modular approach to a complex design is in enabling a general holistic understanding through an overview of several smaller, easier-to-understand modules. This provides an architecture that allows for a better understanding of the overall system; enables multiple people to work on the system in clearly defined areas with minimal conflict and confusion; and makes it easier to change a module and thus develop and evolve the system.
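The modular logic of such parametric chains can be hinted at outside any particular package. The following Python sketch is an illustration only: Grasshopper itself is a visual rather than textual environment, and the function names and parameters here are invented. Each "module" is a small function, and the design is regenerated by re-running the chain with new inputs, never by editing the output geometry directly:

```python
def divide(span, bays):
    """Module 1: split a span into equal bay positions."""
    return [span * i / bays for i in range(bays + 1)]

def loft(xs, height):
    """Module 2: pair each x-position with a height, forming section points."""
    return [(x, height) for x in xs]

def taper(points, factor):
    """Module 3: scale heights along the row, tapering toward one end."""
    n = len(points)
    return [(x, h * (1 - factor * i / max(n - 1, 1)))
            for i, (x, h) in enumerate(points)]

# Changing any upstream parameter regenerates everything downstream,
# mirroring how a Grasshopper definition recomputes when a slider moves.
section = taper(loft(divide(30.0, 6), 12.0), 0.5)
```

Because each module has one clearly defined job, a collaborator can replace `taper` with a different transformation without touching the rest of the chain, which is the advantage the modular approach claims.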

Arguably, structuralist methodologies privilege the architectonics and structure of space rather than space itself; as such, structuralism perhaps has limitations as a lens through which to scrutinise spatial design. To look beyond design as simply the making of geometric forms, we need to look beyond structuralism towards the tenets of constructivism.

2.2. Constructivism

Constructivism's perspective on knowledge is that reality cannot be known directly. Where a structuralist might claim the structure of a phenomenon is its reality, a constructivist would argue it is a construction of reality, but not reality itself. Constructivist learning theory has particular currency here, claiming that new knowledge is constructed from experience. Thus how we experience spatial design informs our individual and unique understanding of place. Returning to the implications of computation for the domain of architecture: when Frank Gehry was famously interviewed by the BBC, he claimed not to use computers for design but as a checking system [14]. In What Designers Know [15] we find a proliferation of architects' hand-drawn sketches, as Bryan Lawson seeks to demystify how designers know design. The sketch and drawing are romanticised in creative discourse, marginalising computation and adding currency to the notion that it may somehow interfere with or adversely affect spatial design. Computation has historically not been considered a creative activity in the same way we might consider painting or sculpture. While there are suggestions that this might be changing, a tension between analogue and digital processes remains within design culture. Whether or not we choose to subscribe to this proposition, emerging designers are equally comfortable creating with a pen or through programming. How do they come to know space, place and design through these emerging processes?

2.3. Test cases in computationally assisted spatial design

To help answer this question we will discuss two architectural design studios that specialise in intensive computation. The first design studio, pluraform, focuses on systems and the generation of geometric form. Inspiration was drawn from natural algorithms such as L-systems, branching structures similar to the way trees or rivers subdivide. Only once a system was thoroughly investigated was an architectural form generated. The second studio, futureChristchurch, advanced ideas of naturally occurring organisation to conceive of the city as inter-weaving systems that create a complex urban ecology. We use these studios, the designers' processes and expert critique to gain insights into the knowledge and skills being obtained by the designers and the attention, or lack of it, being focused on spatial concerns.

3. PLURAFORM & NATURAL ALGORITHMS


Let us turn our attention to the first design studio and the notion of natural algorithms. We will illustrate our discourse with examples of course work and then reflect on its assessment by a panel of expert architects. The pluraform design studio takes place within the third year of an undergraduate Bachelor of Architecture course. With a central focus on complexity, typologies, morphological processes and the scope to work at a variety of scales, the course looks at how systems might be represented physically and computationally, using manual techniques and software. The first part of the course focuses on understanding and representing systems; the second part focuses on how this might be applied to a design problem.

3.1. Natural Computation

Natural computation is interpreted here as taking inspiration from nature to develop problem-solving techniques, achieved through the synthesis of natural phenomena. There are various applications of this type of computation, such as synthesising water flow, cellular growth or predicting river patterns. Its relevance for architecture can be seen in the use of swarm behaviour to simulate and study crowd flow in the design of new sports stadia [16]. The presumption is that nature has already arrived at certain optimum solutions; if those solutions can be applied to the built environment, then certain benefits will be gained.
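As a loose illustration of the swarm idea, the following Python sketch moves a set of agents toward an exit while pushing apart any that crowd too closely. It is a toy model only: the separation rule, attraction weights and distance constants are assumptions for illustration, not a validated crowd-flow simulation of the kind used in stadium design:

```python
import random

def step(agents, exit_pos, repel=1.0, attract=0.1):
    """One tick: each agent drifts toward the exit, but is pushed off
    any neighbour closer than 2 units (a crude separation rule)."""
    new = []
    for i, (x, y) in enumerate(agents):
        dx = (exit_pos[0] - x) * attract
        dy = (exit_pos[1] - y) * attract
        for j, (ox, oy) in enumerate(agents):
            if i == j:
                continue
            d2 = (x - ox) ** 2 + (y - oy) ** 2
            if 0 < d2 < 4.0:  # squared distance below 2 units: separate
                dx += (x - ox) / d2 * repel
                dy += (y - oy) / d2 * repel
        new.append((x + dx, y + dy))
    return new

random.seed(1)
crowd = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]
for _ in range(50):
    crowd = step(crowd, exit_pos=(50.0, 5.0))
```

Even this crude pairing of one attraction and one repulsion rule produces recognisable flow and spacing behaviour, which is the appeal of nature-derived techniques: simple local rules yielding useful global patterns.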

3.2. Artificial Systems

For the purposes of the design studio several systems were chosen and scrutinised by the class. Lindenmayer systems, or L-systems, are one example: a formal grammar simulating the growth patterns of plants. Different shapes of plant emerge in different environmental conditions from identical seeds. Similarly, different variables and conditions can be changed throughout the L-system, and the resultant process can produce a vast array of possible geometries. The left portion of Figure 2 illustrates such a system described in the Rhino 3D modelling software through the Grasshopper parametric interface. The right portion of Figure 2 illustrates the resultant geometry; this geometry is manipulated by changing variables in the parametric interface, not by editing points of the geometric form.
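A minimal L-system can be sketched in a few lines of Python. The grammar below is a common textbook branching rule rather than the students' actual system: the string is rewritten repeatedly, and a turtle interpreter would then turn `F` (draw forward), `+`/`-` (turn) and `[`/`]` (push/pop a branch) into geometry. Changing the rule or the number of generations changes the resulting form, without ever editing the output directly:

```python
def expand(axiom, rules, generations):
    """Rewrite the axiom string repeatedly using the production rules.
    Symbols without a rule pass through unchanged."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A common branching grammar: each forward stroke sprouts two side branches.
rules = {"F": "F[+F]F[-F]F"}
print(expand("F", rules, 2))
```

The rapid growth of the string with each generation mirrors the observation in the text: small parametric changes upstream produce a vast array of downstream geometries.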

Figure 2: An L-system branching parametric structure and the resulting geometry. Project by Justin Baatjes.

Having understood and modelled the system, students then investigate how it might be applied to a design problem. In our first example the branching L-system was seen as being applicable to dispersal; thus it could have application to airport design.

Figure 3: Airport design derived from L-system grammar. Project by Justin Baatjes.

3.3. Analysis of space and form

The airport design in Figure 3 developed from the system illustrated in Figure 2; at first glance the geometric form seems to be privileged in the representation over spatial development. Space is only represented in the three small images embedded in the top right corner of Figure 3. During discussions with the student, tutor and expert panel, the limited spatial development was attributed to running out of time. Readers of this article engaged in teaching will recognise this as a common explanation for underdeveloped work, and it is not without some grounding in fact. However, this type of reasoning deflects attention from a more pressing problem concerning technology and design, which the architect Juhani Pallasmaa summarises as avant-garde approaches to design failing to engage with what it means to be human [17], a position he has held for fifteen years. Yet the architect and futurist Buckminster Fuller, in his manifesto Critical Path, written in 1981, champions technology. In particular he singles out the potential of computation to free humanity from the need to earn a living and to work at tasks of its own choosing [18]. Both theorists, in their own ways, claim humanity should be central where we are considering technology and architecture. This essentially paraphrases the philosopher Martin Heidegger's claim that reflection on technology must come from a different realm [19]. Heidegger argues that when technology banishes man into that kind of revealing which is ordering, it drives out every other possibility of revealing [19]. Thus a human-centred analysis of computing in architecture is essential if something pertinent or of interest is to be, in the words of Heidegger, brought forth. Returning to the design studio, couched within this theory, perhaps what is of interest is the imbalance between technology and humanity. The word humanity is used here in a very general sense, to mean giving consideration to human activity. The computation of pluraform takes centre stage, and the design process ends before any socio-cultural or phenomenological consideration of space is broached. This lack of concern for the phenomenological was echoed during the final design review by a panel of architectural experts.
Two consistent themes emerged from the review: under-developed spatial consideration and under-developed relation to the urban context. These are not uncommon critiques of geometrically and computationally complex designs [20]. Buildings by Gehry, Eisenman and Zaha Hadid have, at different times, faced criticism from a utilitarian or spatial perspective. However, we would argue this is symptomatic of a problem that has taken root deeper within contemporary design culture: humanity is subservient to over-intellectualisation and computation. Arguably, in both educational and commercial design environments, there is never enough time. It would appear from our observations of design that humanity does not command the strong position found in the mandates of Fuller and Pallasmaa; consequently there is limited meditation by students on the human condition. This imbalance can be seen established as a pedagogical norm and later reinforced through the constraints of commercial design practice. Judging by our study of the pluraform design studio, from a purely structuralist perspective computation produces exciting form. This resonates with Donald Norman's differentiation between the complicated and the complex [21], which contends we do not actually like the simple. Instead Norman suggests we require a degree of complexity to maintain our engagement and interest. So although the creative process we have described here was complicated, the design lacked complexity and thus did not engage a panel of experts for long. Using constructivism as a lens to scrutinise design reveals the problematic nature of the process, which Pallasmaa [17] or Fuller [18] would claim is devaluing human and spatial consideration. This perhaps adds currency to claims of computing adversely affecting design. While it can superficially be framed in terms of not enough time, it is symptomatic of a situation grounded much deeper in the intellectual culture of design and technology; emergent designers are being obstructed from engaging with both technology and humanity. The pluraform studio perhaps provides insight into why computing, specifically programming, is perceived as creatively problematic. However, we will now turn our attention to the second studio, futureChristchurch, where the primary course underpinnings remained focused on humanity and place. Observational evidence perhaps provides some insight into the opportunities intensive computing might offer design culture.

4. ARTIFICIAL ECOLOGIES
The futureChristchurch design studio took place in the first year of a postgraduate Master of Architecture course and advanced the themes from the pluraform design studio discussed in the previous section. futureChristchurch asked students to consider the city as a complex ecology of many intertwined and interrelated systems. This is markedly different to current methods of city analysis, where aspects of the city are singled out and removed from their complex multifaceted context. Although this abstraction makes it simpler to understand a particular aspect of the city, it prohibits any sensitisation to the manifold of attractions, flows and resistances that occur between the city's multitude of systems. To paraphrase Donald Norman: paradoxically, simplifying can make complex systems more difficult to understand.

In this section we expand on futureChristchurch, which focuses on mapping the complex data whirlwinds that make up a city's natural and artificial ecologies. We go on to analyse examples of this process; again with the help of expert critique, we reflect on the implications for design practices.

4.1. Data Gathering

There are vast amounts of information available on the urban environment: suburb population densities; domestic, industrial and commercial land usage; soil type, depth and acidity; pollution indices and air quality, to name but a few. This information is typically stored in a variety of analogue and digital media, such as databases secured by different government or council offices, or annual reports. Access is bureaucratic and time-consuming; once obtained, the information usually requires reformatting before it can be reused. The political situation that existed in Christchurch, New Zealand, following a series of devastating earthquakes resulted in the political will to make this information available. Underpinning the futureChristchurch design studio was the question: what can we do with this data to benefit the design process and the future of the city? For the design theorist this situation is coupled to the realisation that online inhabitants are populating public and semi-public databases with information about daily routines and activities. From web services like Flickr and Twitter it is possible to harvest information and glean insight into the life of a city on a particular day or over an hour. In parallel, advancements in computing simulation have resulted in the production of optimised design solutions and deterministic decision-making, where computational power is used to provide a simplified answer to, for example, energy consumption. The FP7-funded BRIDGE project, a tool for city planners, is one such application [22], which computes which design option will be selected based on energy efficiency. How can, or should, the profession leverage information technology to advance design beyond deterministic decision-making?

4.2. Processing micro-urban place

One group of students, the micro-urban group, approached this problem by mapping some of the available information into computational software. In the absence of an obvious design solution at an early stage, the group opted not to invest time translating the data into rigid 3D form. Rather, they devoted time and effort to encoding the data so it could be repurposed in the future, in this case using the Processing software (http://processing.org/). Micro-urban had access to a vast array of data: 25 datasets were available on Christchurch, covering a wide spectrum of environmental data from soil acidity, soil type and depth to pollution and wind direction. To give some idea of the intensity and information density available, a single dataset is illustrated in Figure 4. High-resolution photos of the city were also made available, which the group exposed to pattern-recognition algorithms. Through this process it was possible to identify and encode land usage and foliage configurations.
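The encoding step can be imagined as binning heterogeneous point samples into one addressable grid, so that differently formatted datasets share a common structure for later reuse. The sketch below is written in Python rather than Processing, and the record format, cell size and sample values are assumptions for illustration, not the group's actual data:

```python
def rasterise(samples, cell=100.0):
    """Bin (x, y, value) point samples into grid cells, averaging the
    values that fall in each cell, so any dataset becomes a uniform
    layer addressable by (column, row) key."""
    cells = {}
    for x, y, v in samples:
        key = (int(x // cell), int(y // cell))
        cells.setdefault(key, []).append(v)
    return {k: sum(vs) / len(vs) for k, vs in cells.items()}

# Hypothetical soil-rooting-depth samples in metres at map coordinates.
depth = rasterise([(10, 20, 0.5), (40, 60, 0.7), (150, 20, 1.2)])
```

Once every dataset is rasterised onto the same grid, layers such as soil depth, wind and pollution can be cross-referenced cell by cell, which is what makes the later ecological queries possible.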

Figure 4: Processing-encoded data set for 'soil rooting depth'.

At this point it might be argued that futureChristchurch is unfolding in the same way as pluraform. However, in this instance, by way of the described computational processes, the students gained considerable knowledge of the city's intimate workings. For example, when questioned on smog, the micro-urban group was able to articulate the combination of environmental conditions that needed to align for its creation. The potential then exists to configure a city, using this complex model, to avoid or minimise the probability of these conditions aligning. At this stage programming methodology and data manipulation overwhelmed spatial consideration. However, by programming environmental city ecologies the group gained a deep understanding of these systems and the influence they exert on the city and each other.
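The smog observation suggests a simple underlying mechanism: conditions from several encoded layers must align in the same place at the same time. A hedged Python sketch of such a query follows; the layer names, threshold values and city data are invented for illustration and do not reproduce the group's model:

```python
def aligned(cells, predicates):
    """Return the cells where every layered condition holds at once."""
    return [key for key, layers in cells.items()
            if all(pred(layers.get(name)) for name, pred in predicates.items())]

# Fabricated per-cell layer values for three grid cells.
city = {
    (0, 0): {"wind": 1.2, "pollution": 80, "basin": True},
    (0, 1): {"wind": 6.0, "pollution": 90, "basin": True},
    (1, 0): {"wind": 0.8, "pollution": 20, "basin": False},
}
smog_risk = aligned(city, {
    "wind": lambda v: v is not None and v < 2.0,      # stagnant air
    "pollution": lambda v: v is not None and v > 50,  # high emissions
    "basin": lambda v: bool(v),                       # topographic trap
})
```

A designer could then ask the inverse question the text raises: which single layer is cheapest to change so that the conditions no longer align in a given cell?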

4.3. Agents of change

Added to this environmental data whirlwind, the group also had access to the pre-earthquake configuration of the Christchurch built environment. Data was available on the amounts of commercial, domestic and industrial floor area in each city zone. Working with software that privileged processing rather than form, they now deployed independent intelligent agents representing particular typologies of land usage, such as parks, commercial buildings and domestic houses. In a departure from current ideologies of simulation, the agents did not seek optimal conditions. Instead they looked for better or inferior conditions, towards or away from which they would iteratively move. A domestic building agent, for example, was given a series of conditions such as distance from other domestic or commercial agents, good soil quality, close proximity to parks and so on. The agents were positioned conforming to the pre-earthquake makeup of Christchurch and activated to seek out better conditions for futureChristchurch; a resultant ecology is illustrated in Figure 5. This is a morphological model that was ever changing, a choice informed by an opposition to the deterministic nature of technology within contemporary design. With a newfound sensitivity to the complexity of Christchurch, this recursive model afforded observation of the effect their decisions would have on the shape of a new city. This in turn enabled the group to reflect on the decisions, choices and preferences they were privileging through their programming process, causing a deeper reconsideration of values rather than a superficial consideration of geometry or aesthetics.
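The agent behaviour described above, seeking better rather than optimal conditions, can be sketched as a simple comparative walk over the encoded city grid. The Python below is a speculative reconstruction: the desirability weights, field values and neighbourhood rule are fabricated for illustration, not the group's actual agents:

```python
import random

def desirability(pos, field):
    """Score a cell from its layered conditions (assumed weights:
    soil quality counts for, distance to the nearest park against)."""
    soil, park_dist = field[pos]
    return soil - 0.1 * park_dist

def step(agents, field, rng):
    """Each agent compares its cell with one random neighbour and moves
    only if the neighbour is strictly better: improvement-seeking,
    not optimisation."""
    moves = []
    for pos in agents:
        x, y = pos
        cand = (x + rng.choice([-1, 0, 1]), y + rng.choice([-1, 0, 1]))
        if cand in field and desirability(cand, field) > desirability(pos, field):
            moves.append(cand)
        else:
            moves.append(pos)
    return moves

rng = random.Random(0)
# field[(x, y)] = (soil quality, distance to nearest park) -- fabricated.
field = {(x, y): ((x + y) % 5, abs(x - 2)) for x in range(5) for y in range(5)}
agents = [(0, 0), (4, 4)]
for _ in range(20):
    agents = step(agents, field, rng)
```

Because each agent samples only one neighbour per tick and never insists on the global optimum, the population settles into plausible configurations rather than a single deterministic answer, echoing the group's opposition to optimisation-driven simulation.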

Figure 5: Future Christchurch agent simulation by Micro-urban.

Couching our examples within the theories of Pallasmaa and Fuller, there are a number of notable differences between pluraform and futureChristchurch. Unlike pluraform, from the beginning of the Christchurch project issues of humanity were as present as issues of computation. In the micro-urban group this resulted in a constant questioning and tension between the two, which strongly resonates with the philosopher Martin Heidegger's assertion that technology must always be critiqued through another discipline [19]. Even in this modest project the data became complex and occasionally beyond the programmers' full comprehension. The micro-urban group found themselves watching unexpected organisational patterns emerge, resulting in intense debates amongst the expert panel of reviewers seeking to understand why the agents were flocking in particular ways. The investigation did not result in spatial design; rather it produced a starting point for design or debate. A series of city organisational outputs were generated that used the existing ecologies of Christchurch as their starting point. In principle futureChristchurch used natural ecological systems as the starting point for the city, which is ideologically in opposition to modernist approaches to city plans, romanticised by many architects, of whom Le Corbusier is perhaps the example par excellence. Corbusier perceives buildings as machines [23], suggesting they operate under a set of distinct internal operations, separate from external forces. For the social critic Marx, the process of technological change or automation is also less about the potential prosperity it might facilitate in the form of more cost-effective products; rather, he argues that automation transforms the worker's operations more and more into mechanical operations, so that, at a certain point, the mechanism can step into his place [24]. By appropriating the machine, micro-urban have subverted a central tenet of modernist design culture and a current concern regarding the implications of computational optimisation for the design profession. Instead they created a platform to leverage data processing to advance their design process, and used computation to create built environments that resonate effectively with natural environments. In the appraisal of the design, even without a spatial design solution, a deep understanding of the complexities of place, if not space, was observed by the panel of experts: an understanding of the city's dynamic nature and the manifold of variables that exert influence on created spaces. In futureChristchurch any claims of not enough time became redundant. Although considerable time was invested in learning and computation, it was time spent on programming environmental conditions, values and specifications. The subsequent decisions and behaviours the group deployed in their intelligent agents did not merely change geometries; they reorganised a city. Within their design and computational processes the group were never far removed from the city of Christchurch.

5. SUMMARY
While the first, pluraform, studio focused on systems and methodology, the second, futureChristchurch, studio focused on a real place, the city of Christchurch. While the first projects seemed to result in geometric exercises, the second studio produced compelling city plans. There is perhaps credence here to the often-quoted Frank Lloyd Wright, who said never design a house until you see the site and meet the clients. In our futureChristchurch examples the site literally provided the data that became the starting point for design. Rather than city plans being driven by arbitrary geometrical or political motivations, here the natural ecology of place becomes the starting point for prototypical computational design processes aimed at creating an artificial urban ecology that resonates with its natural environment. Whether or not the simulated systems were accurate representations of the city ecologies is perhaps, from a constructivist ideology, not as important as the designers' knowing of the complex inter-relationships of the city and its occupants, a knowledge gained through intensive programming. Small changes to the system produced dramatic changes to the macro-scale ecology and organisational geometries. In the first studio this was a novelty; by the second studio, however, it pointed to the dramatic influence small decisions can exert on city infrastructure and the impact on spatial design factors, causing the designers to reconsider their decisions and the logic of their value framework. The software created by micro-urban is granular enough to continue to refine the conditions and rules. While the results of the first studio were overtly deterministic, the same could not be said for the micro-urban project that emerged from the second studio. The computation did not output designed space but rather produced a richer representation of place, providing a potentially deeper understanding of Christchurch. Future work being discussed could include individual agent buildings redistributing themselves in response to shifting population density or changing environmental conditions, or proposing potential interim organisational options to assist decision-making in evolving from an existing city configuration into the desired future city. Such design decision-support systems, to be meaningful, would continue to be contingent on the designers' knowing of the intimate workings of the city and its spaces; knowledge that was gained, in our examples, through computation. Observational evidence suggests there is a need to retain focus on real space and place to ensure designers continue to know place and the ways computation can be leveraged to enable design. In the micro-urban project this knowledge was gained through intensive computing and programming. The result was neither a naive deterministic design solution nor the supposition of total creative freedom with computation ameliorating the brittle practicalities. This evolving design culture has instead revealed a more challenging relationship being consolidated between design and computing.
It has revealed where design might be constrained to ensure the longevity of the city as a safe and sustainable proposition and, perhaps importantly for design and computing culture, where the opportunities for design freedoms lie that won't compromise these objectives. While there may not be a prescription here for a computational methodology of assisted design, it perhaps points to a shift in the dominant paradigm of design computing. It did not follow Marx's ideology of simply automating and taking over established design activity. Instead it has embraced Buckminster Fuller's challenge and provided some insight into how technology might augment designers and leverage their knowledge in the creation of future spaces.

Acknowledgments
The authors would like to extend their gratitude to the designers of the work used to illustrate this paper: Justin Baatjes, Jordan Saunders, Adrian Kumar and Yun Kong Sung. They would also like to thank Erich Prem and Julian Padget, who informed some of the themes in this paper.

References
