Autopoietic Architecture

This is an experimental research project integrating computational generative architecture with theory hybridized from adaptive autopoiesis, technology, and cognition, within a framework of environmental, remedial actions. It considers buildings as living, metabolic agents. The framework for this dialectic practice and design praxis samples real-world phenomenological nature, living systems, technology, and botany for data appropriate to metabolic buildings.

Tuesday, August 9, 2016

#2. Toward a Digital-Botanic Morphology for Metabolic Architectures



Seventeen days after the post below, two Datura ferox pods are almost mature and have been sectioned to accompany the CT scans from January (Post #1). They document the living stages and spaces of morphological pod development. Compared to the CT scan (Post #1), the sections show thick living cell walls, clear articulation of the four chambers for seed embryonic development, and undulating membrane walls that separate the seed chambers. We can also see a flower just before it opens in relation to the size of the nearly developed seedpods. Next stages will involve physical model building with the pods as they begin to dry and open.

Saturday, July 23, 2016

#1. Toward a Digital-Botanic Morphology for Metabolic Architectures


Last year, in a contribution to the journal Communicative & Integrative Biology (CIB; http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4594259/), I questioned some of the implications of research stemming from plant intelligence (plant neurobiology) for the possibilities of incorporating living, biological intelligence into architectures. The intention was to help prompt an architectural dialectic for debating non-human intelligence in bioremedial technology and the digital production of metabolic architectures.

In recent and ongoing examination of digital morphology derived from plants, sometimes with scientific imaging equipment, I have adapted properties, forms, and structures for use in design scaffolds that support ideas revolving around biointelligent buildings. Research this summer (2016) is now revving up as plants mature and seedpods develop and ripen. I'm documenting a 1.22 x 1.22 m (4 x 4 foot) garden plot where X Datura ferox are growing. The first botanic print of the season is currently posted below in a draft state. It illustrates the leaves and a maturing seedpod (about 15 days old) next to two industrial CT scans of a dried, mature seedpod (2015 season). As you can see, the CT scan reveals the interior and exterior of the pod, as well as the dividing membrane that sections the pod interior into four chambers. In upcoming posts, I will section some of this year's seedpods to track morphological states as they develop and the pods split open to disperse seeds.
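As a rough illustration of what such virtual sectioning involves, here is a minimal Python sketch. It assumes the industrial CT data are exported as a folder of sequential slice images (the directory name and file layout here are hypothetical; the actual scans were produced and handled with dedicated CT software): the slices are stacked into a volume and an orthogonal cross-section is pulled out, the digital counterpart of physically cutting a pod.

```python
# Minimal sketch: build a CT volume from slice images and take a virtual section.
# Directory name and file layout are hypothetical; requires numpy and Pillow.
import glob
import numpy as np
from PIL import Image

def load_ct_volume(slice_dir):
    paths = sorted(glob.glob(f"{slice_dir}/*.png"))
    slices = [np.asarray(Image.open(p).convert("L")) for p in paths]
    return np.stack(slices, axis=0)  # shape: (z, y, x)

def virtual_section(volume, axis, index):
    # axis 0 gives a transverse slice; axes 1 and 2 give longitudinal sections
    # that would pass through the seed chambers and their dividing membrane
    return np.take(volume, index, axis=axis)

# Hypothetical usage:
# vol = load_ct_volume("datura_pod_ct")
# mid_longitudinal = virtual_section(vol, axis=1, index=vol.shape[1] // 2)
```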

Tuesday, May 17, 2016

Yucca glauca -- full bloom



Tower for Glasgow (right), based on the distribution of Yucca glauca flowers, and later their seedpods, for light, view, and air circulation. On the left is one of this year's wild yuccas near my studio, being monitored for morphological and performative-growth properties that contribute botanic research input to the generation of this year's design experiments. See also the post for 10 May below.

Thursday, May 12, 2016

Tent Caterpillar Structure. Santa Fe, NM


This is a tent moth nest in Santa Fe, NM, with one silk structure inside a larger outer structure. In relation to Neri Oxman's Silk Cocoon project (TED talk), I thought it interesting because the inner structure is the same material but not exactly the same form. The silk walls are membranes within membranes for heat conservation, and the caterpillars move from space to space depending on temperature.

https://en.wikipedia.org/wiki/Tent_caterpillar

Tuesday, May 10, 2016

Yucca buds

First Yucca glauca buds for 2016. The species, used in biomimetic research since 2005 for its properties of stacking and form distribution (PodTower below), will be one of the native-plant focuses for metabolic building research this summer.

Thursday, May 5, 2016

The Next Rembrandt & Can Buildings Think? Post #5

In a phone interview with Emmanuel Flores, Technical Lead for The Next Rembrandt, we spoke of the multidisciplinary teams and the multifaceted technologies involved in the project. He emphasized a basic question underpinning his group's effort: "What makes a Rembrandt painting look like a Rembrandt painting?"

With that query, Flores and team could formulate research steps, goals, and procedures. One of the first steps was to scan all 346 Rembrandt paintings in order to undertake data processing and digital generation. Making high-resolution scans was thus a primary task for unlocking facets of Rembrandt's techniques embedded in paint. However, the scans required subsequent types of analysis, face recognition for one. Facial recognition algorithms, programmed to isolate pictorial attributes and reveal insight into Rembrandt's painted mannerisms, provided input for simultaneous or subsequent analysis. Machine learning could then, as stated in the project's press release, "understand Rembrandt based on his use of geometry, composition, and painting materials" (1).
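To make that kind of pictorial measurement a little more concrete, here is a minimal Python sketch, emphatically not the project's actual pipeline: it uses OpenCV's stock Haar-cascade face detector as a stand-in for the project's far more sophisticated face-recognition software and records a few simple geometric attributes of a detected face (placement on the canvas, relative size, proportion). The file name is hypothetical.

```python
# Minimal sketch: detect a face in a scanned painting and record simple geometric
# attributes. OpenCV's bundled Haar cascade stands in for the project's own software;
# the image file name is hypothetical.
import cv2

def face_geometry(image_path):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    h_img, w_img = gray.shape
    features = []
    for (x, y, w, h) in faces:
        features.append({
            "relative_width": w / w_img,      # how much of the canvas the face occupies
            "center_x": (x + w / 2) / w_img,  # horizontal placement in the composition
            "center_y": (y + h / 2) / h_img,  # vertical placement in the composition
            "aspect_ratio": w / h,            # rough proportion of the detected face
        })
    return features

# Hypothetical usage:
# print(face_geometry("rembrandt_scan_001.png"))
```

Attributes of this kind, aggregated across many scans, are the sort of material a learning system could then generalize from.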

Here I would like to emphasize that the learning algorithms are themselves human artistic achievements, the creative contributions of the programmers, technicians, historians, and project visionaries. This is a critical point because it acknowledges the creative side of interactions between humans and intelligent machines, and avoids pigeonholing the project as merely an effort at replication. The machine/human interface for The Next Rembrandt thus factors data into machine learning in ways analogous to another recent AI milestone: the victories of AlphaGo, from Demis Hassabis's Google DeepMind, in recent Go championships (2).

At its core, The Next Rembrandt is a learning and visualization project subjecting masterworks to questions of gestation revealed in form, color, light, and geometric juxtaposition. The Next Rembrandt is not about forging a computerized painting; it is a new way of analyzing and understanding embedded techniques and sensibility. Flores stressed that the undertaking was a collaborative effort involving the deployment of algorithmic analysis and of creative power dependent on human intelligence, manifested in computation, and brought forth as a tool to enhance our perceptions.

In this sense, The Next Rembrandt foregrounds human/machine challenges to decode the painter's sensibilities brushed onto canvas. Those sensibilities functioned for Rembrandt via his transference of intuition and insight to paint. By programming digital optics and machine intelligence to unmask facets of his work, the encrypted essences of his paintings, the project pried open details transferable to our own insights. As a corporeal example, Flores noted that Rembrandt regularly painted the consequences of ambient light as small white highlights gleaming in his subjects' eyes. Those atmospheric marks, once digitized, could be surveyed, saved to memory, and later deployed by generative software as ambient traits, phenomenological and environmental detail critical to the master's pictorial construction of depth and space.
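As a very rough analogue of how such a painted highlight might be digitized and surveyed, the following Python sketch (an assumption-laden stand-in, not the project's method) thresholds a hypothetical cropped eye-region image for near-white pixels and keeps only small, catchlight-sized blobs; the file name and threshold values are illustrative guesses.

```python
# Minimal sketch: find small, bright highlights (catchlights) in a cropped eye region
# by simple thresholding. File name and thresholds are illustrative, not the project's.
import cv2

def find_highlights(eye_region_path, brightness_threshold=230, max_area=50):
    gray = cv2.imread(eye_region_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(eye_region_path)
    # Keep only near-white pixels
    _, mask = cv2.threshold(gray, brightness_threshold, 255, cv2.THRESH_BINARY)
    # Group them into connected blobs and keep the small, catchlight-sized ones
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [
        tuple(centroids[i]) for i in range(1, n)
        if stats[i, cv2.CC_STAT_AREA] <= max_area
    ]  # (x, y) centers of candidate highlights

# Hypothetical usage:
# print(find_highlights("eye_region_crop.png"))
```
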
While historians of art have, for centuries, observed ambient light in Rembrandt's portraits and registered it as lending lifelike credibility, machines and artificial intelligence are only now acquiring comparable algorithmic strength to learn and thereby incorporate such information. In a dynamic turn from its cybernetic origins, AI is learning and making decisions in ways I identify as artificial sensibility. Incrementally, computational programming and code, once considered the solitary domain of cognition and abstract human thought, are supplementing and aiding research with cognitive-like AI facility whose emergent artificial sensibility hints at metabolic AI to come. Here then, recalling Post #4 (below), lifelike digital sensibility alerts us to Turing's thought that "the machine had originated something."

1. The Next Rembrandt. Press Release.

http://thenextrembrandt.pr.co/125449-can-technology-and-data-bring-back-to-life-one-of-the-greatest-painters-of-all-time


2. Clemency Burton-Hill. "The superhero of artificial intelligence: can this genius keep it in check?" The Guardian. 16 February 2016.

https://www.theguardian.com/technology/2016/feb/16/demis-hassabis-artificial-intelligence-deepmind-alphago