Wednesday, 11 March 2009

The final project: A proposal

The final project for this Agile module is to create a work, based on the previous workshops, that engages with the concept of Agile: architectures for the near future. A number of different ideas relating to this topic are discussed in the synthesis of this report. The most obvious possibility would be to continue the vision system animation produced during the Territory project. That animation was never fully functioning or implemented, so there is a lot of room for development and improvement. The project was also quite enjoyable and interesting, which makes it a good candidate to take further.
As well as this, there is the idea of connecting GPS narrative elements with the GreenScreen and Arch-OS, which could be quite dynamic and interesting, and there is the option of investigating social ecology through virtual environments. All of these elements would be interesting to develop, especially the newer ideas, as they would build on and move beyond what was done previously. These ideas will need to be developed further, however, to establish whether they would be appropriate.
Ultimately, for my project I would like to incorporate some use of Arch-OS or the GreenScreen, or both, as I found these the most engaging sections of the module. However, I would like to use them in a more abstract way, rather than simply taking the Arch-OS data and visualising it on the GreenScreen, as this has already been approached a number of times before.

Monday, 9 March 2009

The Workshops: A Synthesis

Introduction
In the three workshops a wide range of different and new concepts were introduced. The aim of this synthesis is to combine all of these ideas and consider the new possibilities that emerge from them. In particular, there may be inspiration for the final Agile project in combining different elements of the three workshops. In this synthesis I will draw separate conclusions about each workshop and then outline the different ideas that have stemmed from my experiences.

The Picnic
The Picnic was the first workshop and introduced a number of the module's leading themes, including abstraction and reconstruction, and gave us the ability to look at information from a different perspective. The Picnic was a steep learning curve in terms of these more open ways of thinking about and viewing the environment, but this in some ways made it more engaging. It also benefited the later workshops: concepts such as abstraction ran throughout them, so being introduced to them at an early stage made it easier to carry out the later tasks. In terms of the ecology aspect of this workshop, both social and human ecology were covered, which gave a broader view of the module's underlying concept. The most engaging aspect was discovering ways in which data can be abstracted; although I knew of this concept, I at first found it difficult to put into practice, so working through it helped to develop my creative skills.

The Field
The Field was the second workshop; it involved hertzian space and locative media and focused on human ecology. The main focus of this workshop was the GPS track and ways of developing it from a simple GPS drawing into a more dynamic outcome. I found the hertzian space element particularly interesting, as it allowed me to investigate a question I had previously considered while developing it further into a spatial understanding of my surroundings that was relevant to the task at hand. Unlike the GPS section, this part saw little development beyond the initial investigation, so there is scope for continuing it in the future.

The Territory
The Territory was the third and final workshop and revolved around the Arch-OS system. Of the three workshops, I felt this one was the most informative for understanding and developing ideas around its ecological theme of deep ecology. The most interesting element was the Arch-OS section, as the data it provides offers numerous possibilities, and the concept of intelligent architecture is a new and exciting area to explore. Studying the Arch-OS system introduced a new perspective on the building itself, one that others unaware of Arch-OS would not see. This kind of discovery is also one of the reasons the GreenScreen is interesting to work with, as it relies on people having to look closer and unravel the information to understand what it really represents. The philosophical element of deep ecology also makes this section more flexible in terms of concepts and ideas.

New Ideas
By combining the information learnt during these workshops a number of new ideas can emerge. These ideas can then widen our investigation, but more importantly can be used to develop a concept for the final Agile project.
Firstly, there are ideas such as those already voiced in this synthesis. In particular, I feel the investigation of hertzian space not involving GPS was left undeveloped, and there are a number of ways in which it could be continued. One is to investigate hertzian space over time. The maps made at the beginning of the Field using wi-fi and Bluetooth were static images, but the position, strength and other characteristics of these frequencies change over time. Therefore, similar to the GPS narrative, the hertzian space could be studied and then reconstructed over time. This would move the hertzian space element away from a space-based approach and towards a time-based one, linking elements of the Picnic with the Field.
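As a rough illustration of how such a time-based study could begin, the Python sketch below simply logs repeated snapshots of the signals currently in range. scan_networks() is a hypothetical stand-in for whatever stumbling tool is actually used, and the interval and sample counts are arbitrary.

    import time
    from datetime import datetime

    def scan_networks():
        # Hypothetical helper: replace with a real scan, e.g. by wrapping
        # a wi-fi stumbler. Returns (network name, signal strength) pairs.
        return [("eduroam", -60), ("HomeHub", -75)]

    def log_hertzian_space(interval_seconds=60, samples=30):
        # Build a time series of the hertzian space: one snapshot per
        # interval, each stamped with the time it was taken.
        history = []
        for _ in range(samples):
            history.append({
                "time": datetime.now().isoformat(),
                "signals": scan_networks(),
            })
            time.sleep(interval_seconds)
        return history

The resulting history could then be reconstructed into an animation in much the same way the GPS narrative was built from its track points.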
Another idea that emerged from these workshops is a way of combining elements of the GPS section with the GreenScreen from the Territory. Previously the GPS elements were presented either as a static image or as a simple video narrative demonstrating the development of the path. These have only been viewable on a computer through this blog, whereas with the GreenScreen the tracks could be displayed to the public. Furthermore, if some of the Arch-OS data could be used in producing the GPS element of the concept, the final image would be more relevant as a whole.
The ecologies covered in this module also suggest new ideas. In particular, it occurred to me that social and human ecology are quite difficult to separate, as whenever people interact socially they are also interacting with their physical environment. This made me question whether it is possible to observe social ecology without the interference of human ecology. For this to be possible, the social interaction would have to occur in a non-physical environment. That led me to consider virtual social environments such as MSN and Facebook, which allow for a social environment without any involvement of the physical one. This could be an interesting area to investigate, as it approaches a completely different kind of ecology in terms of social interaction.

Conclusion
Overall the Agile workshops have been very effective in introducing and developing new concepts relating to architectures of the near future. Although certain elements were difficult to understand to begin with, by working through the projects and looking at examples of similar work a better understanding began to develop. This module has been an interesting developmental process that I have enjoyed, and it has introduced a wide range of different techniques and strategies. The final stage of the module is to take these ideas and create a final project with them that will be interesting and effective.

The Territory: A Synopsis

The final workshop covered by this module is entitled The Territory and focuses on the concept of deep ecology. Deep ecology places more emphasis on the non-human elements of our environment, such as ecosystems and species, and holds that the living environment should be able to grow and interact in the way that the human species can. It also suggests that the environment should not be used purely as a human resource, but should have its own purpose and reason for being. The study of deep ecology can be a more philosophical approach to the environment, looking at the 'whys' and 'hows' of our existence and our impact on the environment, rather than simply looking at the surface of ecology (our interaction with the environment) as human or social ecology do.
In order to investigate the concept of deep ecology, the Territory is based on the Arch-OS system. This is a computerised system set up in the Portland Square building of Plymouth University which monitors different characteristics inside and outside the building and collects data about them, which can then be manipulated to demonstrate the changes in the building. This can in some ways be considered the life of the building and therefore ties in with the concept of deep ecology, as the building is viewed not as a resource but as a separate living object which can interact with the environment in its own way.
There is an overview and three main areas covered by the Territory, which are listed below. To find out about each of them, click on the link, which will redirect you to the relevant post:
  1. Live Streaming - overview
  2. Video Streaming
  3. The GreenScreen
  4. Arch-OS

Once we had investigated each of these elements we were asked to carry out a small project in which we could use some or all of the elements to create an interesting streaming experience. For my project I decided to work with both the GreenScreen and the Arch-OS data to create a visualisation for the movement of people through Portland Square. To find out more about this project and to see the final result visit the posts below:


  1. My Streaming Project
  2. Seeing and Dreaming
  3. The Building's Dream

Overall the streaming project was quite successful, as it created an intriguing and dynamic image. In particular it was successful in terms of the context of the project: it was designed for the building and personified it in a way that matched the concepts set out by deep ecology. Its main downfall, however, was that it did not take its data directly from Arch-OS, because the vision system was not working. This made the animation less relevant and gave no sense of the real-time life of the building. It is quite conceivable that this could be implemented properly, though, so the product at least acts as a successful demonstration of what could be created.

Sunday, 8 March 2009

The Field: space-based narrative

After creating the GPS drawing using GPS devices and visualisation software, and documenting the area in which the drawing was obtained, this information had to be brought together and turned into a form of narrative. This could take a number of forms, including text, sound and video. An example of related narrative work is that of Janet Cardiff, who produced what she called narrative walks, in which she narrated the path she took and what she saw. The aim was for others to follow the directions elsewhere, which might result in similarities with the original path; this was a path narrative made using sound. For my narrative I decided to animate the documentation to show the exact path taken through the images, while overlaying the path as it was created. To view this video and more information on the creation of this space-based narrative, visit Narrative for GPS Track.
This space-based narrative is quite a successful representation of the GPS drawing. The main element leading to its success is the relationship between the line drawn on the ground and the actual GPS drawing placed on top. Without the track overlay, the viewer of the video is unaware of their distance or location along the track; with the drawing laid on top it is possible to know precisely which part of the path the video is currently displaying. The book element of the video is also quite effective, as it gives the impression of the images developing linearly through time along with the narrative, similar to the contents of a book. The downside of this narrative is that it is not the most original way of displaying the information, and there may have been more abstract and interesting ways to create it.
Overall the narrative is an element that, in this case, encompasses all of the GPS elements previously obtained throughout The Field. This makes it quite an interesting construction, as it not only shows the different elements but highlights the relationships between them.

The Field: space-based processes

Once the GPS drawing had been created, we were asked to document the path that was taken, thereby covering the space-based processes behind the space-based drawing. This reverted to traditional forms of documentation, such as notes and collage, as a method for displaying the processes involved in creating the GPS drawing. For my documentation I decided to photograph sections of the path I took and put these into a collage. To relate the collage to the sections of the path, I placed the images along a drawing of the path I took. This ensured the documentation was relevant to the drawing and process to which it referred. The collage I produced is as follows:

To see more about this collage go to GPS Path Photos. Although this collage does show all of the important sections of the path I took, I do not feel it is all that effective as an image. The page is too cluttered in some areas to be read properly, and it is in some ways difficult to know in which order the images were taken. It is therefore not as effective in documenting the path as it could be. It may have been better simply to lay the images out linearly, so that each image could be viewed and the order in which they were taken would be obvious. It is, however, a thorough documentation of the areas covered in the creation of the GPS drawing, so there is a lot of information available from the image.
Overall the space-based processes section of The Field helps to further inform the space-based navigation section and provides a better representation of the events that occurred during the navigation task.

The Field: space-based navigation

The second practical session of The Field is labelled space-based navigation, which relates more closely to the concept of locative media. In particular, this part of the project dealt with GPS tracking through the use of GPS devices. By tracing your movement with a GPS device you are able to draw images onto the landscape, almost mimicking a virtual graffiti. The task was to create an image using a GPS track within the space around the university, like the GPS drawings created by others around the globe. After looking at a number of examples, such as that shown right, I decided that I liked the idea of being able to write text onto the landscape and so began to design my own drawing. The image had to be possible within the paths around the university, so I used a map of the university to define it. I also felt the image should relate to the task at hand, as this would make the drawing more relevant. The drawing I created shows the word GPS drawn out, with a pencil tracing the path, and is as follows:


The only trouble with the track was that the GPS device recorded it as a series of points rather than a line, so I had to overlay a line on the image to make it more readable (a rough sketch of this step is given after the links below). Apart from this, the track was quite successful and included a detailed navigation of the space available. For more information about creating this track, view the following posts:

  1. GPS Drawing
  2. Flat GPS Track
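For anyone curious how the point-to-line overlay could be reproduced, the following is a minimal sketch in Python using matplotlib; the coordinates are made up for illustration, as a real track would be read from the device's own log.

    import matplotlib.pyplot as plt

    # Made-up sample of logged GPS fixes as (longitude, latitude) pairs.
    points = [(-4.1395, 50.3755), (-4.1392, 50.3757),
              (-4.1388, 50.3756), (-4.1385, 50.3759)]

    lons = [p[0] for p in points]
    lats = [p[1] for p in points]

    # The device records discrete fixes (the scatter); joining consecutive
    # fixes with a line is what makes the drawing readable as a path.
    plt.scatter(lons, lats, s=10, label="recorded GPS points")
    plt.plot(lons, lats, linewidth=1, label="overlaid line")
    plt.legend()
    plt.xlabel("longitude")
    plt.ylabel("latitude")
    plt.show()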

This space-based navigation developed the previous elements of the Field by introducing the concept of using and manipulating hertzian space in order to create images that only exist in a virtual environment.

The Field: space-based mapping

The first element of The Field was to investigate the concept of hertzian space further by carrying out a space-based mapping exercise. The task was to select a hertzian frequency and a location to study, and then map that frequency at that location. The size of the location was dependent on the frequency chosen, due to the range each frequency can span. Similarly, the ways in which the hertzian space could be investigated were dependent on the frequency, due to the technologies available for each. For example, Bluetooth can be investigated using mobile phones, while wi-fi can be investigated using stumbling software on computers.

For my space-based map I first chose to investigate wi-fi using my laptop in my accommodation at Alexandra Works. I thought this would be an interesting hertzian space to investigate, as I have a wireless network in my room, but I was intrigued to know how many other students in the building had done the same thing and what other networks were accessible. After investigating the building a number of times with my laptop I discovered a surprising number of wireless networks, and then made a map to portray their location and range (shown left). This demonstrates an interesting use of the hertzian space around the building, especially in areas where the signals overlap, as the hertzian space there is shared by more than one signal. To find out more about the production of this map, view the post Hertzian Space.
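As a sketch of how such a map could be drawn programmatically, the following Python/matplotlib snippet plots each network as a semi-transparent circle; the positions and ranges are invented for illustration, since the real values came from walking the building with the laptop.

    import matplotlib.pyplot as plt
    from matplotlib.patches import Circle

    # Invented positions (metres on a floor plan) and estimated ranges.
    networks = [("Network A", (5, 5), 8),
                ("Network B", (12, 6), 10),
                ("Network C", (8, 14), 6)]

    fig, ax = plt.subplots()
    for name, (x, y), estimated_range in networks:
        # Semi-transparent circles: where they overlap, the hertzian
        # space is shared by more than one signal.
        ax.add_patch(Circle((x, y), estimated_range, alpha=0.3))
        ax.annotate(name, (x, y), ha="center")
    ax.set_xlim(-5, 25)
    ax.set_ylim(-5, 25)
    ax.set_aspect("equal")
    plt.show()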

However, this space-based map was not all that accurate, as the actual ranges of each of the networks were likely to be more irregular due to the interference of the building. I therefore decided to make a second map using Bluetooth in the Roland Levinsky building of the university; the map I produced is shown right. This was an interesting area to investigate, as the movement and range of the Bluetooth devices could be monitored by recognising the same Bluetooth names on different floors. It also demonstrates where the hertzian space is most dense with Bluetooth signals and where it is not. To find out more about this second map, visit Roland Levinsky Mapping.
Overall, the space-based maps I produced were both relatively successful. Investigating the areas described introduced a new way of viewing the space that technology reaches, as well as a different way of viewing the environment as a whole. It demonstrated an interaction with the environment, relating to human ecology, which may previously have been ignored, and it increased awareness of the hertzian space in use around us.

The Field: the beginning

The second section covered by this module was entitled The Field. The focus of The Field is human ecology, as it investigates the different ways in which we interact with the environment as human beings. In particular, The Field is space-orientated and involves the interaction of technology with the space around it. The aim of the section was to introduce different ways to access, perceive and construct space, as well as to map and manipulate the space around us. To begin The Field we were introduced to the concepts of hertzian space and locative media.
Hertzian space is the space taken up by hertz-based frequency transmissions. These transmissions are invisible to the naked eye and so are not usually considered to fill any space at all. Examples include wi-fi, Bluetooth, radio, TV and mobile signals. All of these transmissions are broadcast through the air but, due to the lack of any visible physicality, are not thought of as using space. These transmissions do hold a physical form, however; we are simply unable to see the electromagnetic frequencies at which they exist. Left is an example of a visualisation of hertzian space based on mobile phone calls, using balloons. By studying hertzian space, a new view of the environment can be discovered in which the atmosphere is full of different hertzian materials.
Locative media, on the other hand, is media that communicates information at a defined location; technologies such as GPS enable locative media to function. The term was coined by Karlis Kalnins to distinguish technologies that deal with locations creatively from location-based services. By using locative media, people are able to investigate and experiment with the space around them in a new way.
By dealing with locative media and hertzian space through The Field project a new understanding of our environment in terms of technology and frequency can be discovered and investigated.

The Picnic: the picnic mat

The final part of the Picnic project was to take the time-based drawing (the notations), print it onto A3 paper and mould the paper to represent the information it contained. The aim was to convert the whole drawing to 3D (similar to the time-based model) but this time finishing with an actual 3D object which displayed all of the information. This 3D object can in some ways be described as the picnic mat, as it is where all the observations are located, similar to the mat of the original picnic. Also, by converting the 2D map into a 3D model, it places the observations back into a physical environment, similar to that in which they were first made. This continues the reconstruction of the information from a different perspective. The 3D object I created is shown below:



This model was designed so that the peaks of the object were where the highest frequency of the notations occurred, while the troughs of the model were where there was little notation. On the non-printed side however these characteristics are reversed. For more information on this model see the following post:
  1. Paper Model of picnic
The main criticism of this model, however, is that its characteristics are quite difficult to identify in images such as those above. In physical form the different folds are easier to read, so to make the key folds easier to recognise in images I have made the following illustration:

This also demonstrates how much the observations rely on the presence of people in the image, as each of the key peaks is located at a point where people were positioned in the original photograph. This highlights an interesting relationship between the environment and the presence of people, as many of the observable elements of the environment, such as sound and movement, are most often sourced from humans.
This model almost acts as a complete reconstruction of the original picnic, but it is missing a vital element. The original picnic operated in 4D, in that it occurred over a period of time. The 3D object above does not incorporate time and so cannot be described as a complete reconstruction. This final element, however, was not required as part of the project. The final 3D object provides an abstract and intriguing perspective on the picnic, and acts as a viable conclusion to the project.

The Picnic: time-based model

Once the notational section of The Picnic was complete, the next stage was to create a model based on the interaction that was studied. This model could then be photographed and put back into the notations as an image. By doing this, 3D elements would be incorporated back into the map of the picnic, linking it more directly to the original physical picnic and the time-based photograph. The notations alone acted on a purely 2D basis, deconstructing the picnic photograph into a flat illustration. By placing images of a 3D model within the notations, the time-based drawing would become more dynamic and more closely related to the original event.
To help us fully understand this concept, we were shown examples of previous students' work completed for tasks similar to our own. The students had to study an interaction they made with their environment that was personal to them, and then model an object that would develop or improve this interaction. Specifically, they had to relate their model to a body part linked to the interaction. Left and right are two examples: the first is a model based on the neck, relating to make-up and hair routines; the second is a model of the movement of the leg.

These models demonstrate the abstraction of the original interaction into a physical reconstruction of the event from a different perspective. This is similar to the aim of The Picnic in that the original picnic was to be deconstructed then slowly reconstructed to produce a map of the picnic that showed it from a different perspective. The time-based model therefore acted as a part of this process, similar to the moulds and models made in the above projects.
The interaction I had chosen to focus on was that of the hands within the picnic, so for the model I made a mould of the back of my hand. I made the knuckles more prominent by moulding a clenched hand rather than a flat one, as this made the mould easier to recognise and a more interesting shape to look at. The following are images of the model that I made:




Once I had moulded the model of my hand and photographed it the images were placed over the hands of those in the time-based photograph as a final notation modelling the interactions that were made. The final notation therefore was as follows:


For more information on the production and development of the model and notations see the following posts:


  1. My Cast and More
  2. Almost completed in 2D
  3. Final Notations

By incorporating the 3D element into the notations, the interactions that were studied instantly become more prominent. The model also adds a sense of depth to the notation, because the placement of the model matches that of the time-based photograph. This is an interesting observation: not only does the inclusion of the model add a 3D object to the image, it also translates the whole notation from a 2D perspective to a more 3D one. This demonstrates that including the model was more effective than first intended, and therefore more successful.
Overall the time-based model allowed us to investigate further the ways in which we can interpret our interactions with the environment and develop them into a more abstract reconstruction.

Saturday, 7 March 2009

The Picnic: time-based drawing

Once the time-based photograph had been constructed, the next stage was to create a 'time-based drawing' from it. This meant taking our photograph, extracting the information from it and then notating this information in a different and abstract way. Five different types of information had to be covered, extracted from the image and then notated. Examples included light/dark areas and fast/slow movements. The five covered in my drawing were light/dark areas, hard/soft areas, loud/quiet areas, fast/slow movements and the time-based aspect: a numerical representation of the order in which the photos were taken. To give us a better understanding of how to take the time-based photo and notate it, a number of examples were given of ways in which information can be visualised. One leading example was Edward Tufte's work on visualising data.

Edward Tufte's work revolves around the visualisation of data and information, but in a way that is more effective and efficient in portraying the information to the audience. This means using unconventional methods of communicating information, i.e. not basic graphs and tallies, but stimulating and representative images that portray the same information using as little notation as possible. This demonstrates an interesting and unique way of looking at and dealing with information. His works often include environmental elements, although the basis is the modelling of data. Looking at works such as Tufte's gave us inspiration for notating our images in a way that displays the information within them. The following are the final notation images I produced during the project:



These notations also show the model section of the Picnic discussed later in The Picnic: time-based model. To find out more about the construction of these notations select the links to the posts below:
  1. Further Developments
  2. My Cast and More
  3. Almost completed in 2D
  4. Hard and soft is complete
  5. Final Notations

The time-based drawing I produced was actually quite successful. To begin with, I found it difficult to find interesting and unique ways to notate the environment, mainly because of the challenge of abstracting elements of the environment that are not usually paid any attention, such as light and sound. However, the more time I spent experimenting and looking for ideas, the easier I found it to make the necessary abstractions, resulting in the final notations shown above.
Of the notations I produced, I particularly like the grid used to display light and dark areas, and the dots used to portray hard and soft materials. These are the better notations because they are more thoroughly abstracted from what they relate to, which makes them more interesting, as their purpose is less obvious. They also add interesting shape and colour to the drawing, as they are more widespread characteristics that do not rely on the people in the photograph.
Overall the time-based drawing was a revealing and developmental task that was vitally important in learning to think more abstractly about the task at hand. Not only did it help in terms of future projects but also in terms of the ecological factor of the Picnic. By abstracting the photograph of the Picnic it highlighted elements of the environment which we interact with that we may not have previously noticed, therefore enhancing the human ecology aspect of the Picnic.

The Picnic: time-based photograph

Along with demonstrating works for ideas and inspiration, the beginning of the Picnic section also included the introduction to the Picnic project. The aim of this project was first to document specific social interactions of your choosing using photographs, and then to develop this into a map of the interactions that occurred. The initial documentation was titled the time-based photograph, as the photos were taken over a period of time and demonstrate the changes in interactions over time. The social interactions studied had to be specific aspects of people's behaviour, such as gestures or expressions, which meant focusing on photographing particular body parts. For my time-based photograph I chose to focus on the hands, as these are the main means of communicating other than speech and so would be quite dynamic. The posts covering this section of the picnic are listed below. To view them, click the titles:

  1. Picnic Photos
  2. Collation of Photos

The final collage that I created resulted in the following time-based photograph:


I think this collage is quite successful in portraying the social interactions that occurred on the Picnic. In particular a variety of hand gestures are portrayed, demonstrating the focus of the image. The image however has not lost any readability due to this and so the situation can be easily assessed by the viewer. The composition of the image allows for the various movements and gestures to be displayed without crowding the image. As well as social interactions it also demonstrates interaction with both natural and man-made objects within the environment, which directly links with the theme of both social and human ecology covered in this module.
The time-based photograph was an interesting and engaging way to begin this project, and the module overall. Using a practical and social event to launch us into the project allowed us to get to grips instantly with the mindset needed, while also acting as a useful introduction to the people and places around us.

Friday, 6 March 2009

The Picnic: the beginning

The Picnic: human ecology is the first section covered in this module and was therefore also the first step in approaching the concepts defined by Agile: architectures for the near future. The aim of this first section was both to introduce the module as a whole and to introduce the subject of human ecology. Human ecology outlines the interaction of humans with their environment, and more specifically their physical, spatial and temporal relationships with it. The temporal element is fundamental to this section of the module, as it acts as the scale against which the human ecology is measured. This section can therefore also be considered time-based.
The study of The Picnic began with the introduction of our Picnic project and a presentation of previous and related works demonstrating ways of visualising interactions made over time. Some of these were directly related to human ecology, while others were more generic, relating to ways of visualising information. In particular, Edward Tufte and his works were highlighted as good guidance for what we were trying to achieve and as general inspiration (for more information see The Picnic: time-based drawing). Being introduced to ways of dealing with information such as Tufte's enabled us to better understand how to approach the module, and encouraged us to think in a more abstract and dynamic way. Tufte's work, along with that of others, acted as an initial influence to kick-start our ideas and creativity, as well as being an important source to refer back to in the future.

The Report: An Introduction

The main title of this module of study is Agile: architectures of the near future, with a particular focus on the idea of ecology. This deals with concepts involving the production of structures and forms that relate to the changes and development of the environment around us, allowing us to see and understand elements of our environment that we may not have realised were there. By gaining this knowledge we are able to redesign and add to the environment in a way that makes it more responsive. The explicit use of ecology as a focus highlights the importance of interactions with the environment in developing and understanding these concepts. The aim of the module as a whole is to introduce ideas relating to this area of study, and then to use that information to produce a final project which demonstrates our understanding and develops a unique and interesting outcome. In this report the ideas and elements of this module will be discussed, along with the various projects that help to implement and experiment with them.
In particular, three sections of this module will be covered: The Picnic: social ecology, which is time-based; The Field: human ecology, which is space-based; and The Territory: deep ecology, which is based on Arch-OS concepts. The projects for each section were worked on throughout that section, allowing us to use the information gained as we progressed to develop and improve our projects. By looking at these areas and concepts in detail, this report will allow a development of ideas for the final project, and ultimately a final project proposal based on them.

Thursday, 5 March 2009

The Building's Dream

After using the vision system video to draw out the movement of the people through the atria, I removed the video from the animation, leaving just the movement squares, and placed them on a black background to match what is needed for the GreenScreen. However, when drawing my animation I did not match the GreenScreen resolution, so the animation is currently the wrong shape to be properly streamed through the screen. This is not particularly important, though, as this animation is only meant to give some idea of what could be produced. The result is shown below:


Although this is not currently running in real time, if it were developed it could be programmed to run directly off the vision system and so be made more relevant. The animation is quite a literal representation of the original data, but it looks quite interesting because the direct relationship with the movement in the video has been removed. This changes the viewer's perspective: without the video to relate to, they are able to focus on the less obvious aspects of the video displayed in the animation. The interaction between sets of squares is particularly interesting, as separate clusters merge together at times, whereas the people they represent were unlikely to touch at all. This is similar to the concept of personal space: by using squares which extend the space taken up by the people, the personal space they have is in some way visualised. The completion of this project completes the streaming section of the module. The next step is to document and synthesise all of the experiences and information gained since the module began, which will then lead on to developing a final project based on what we have learnt.

Seeing and Dreaming

After deciding on my idea for the streaming project, I began to design and produce it. First, I considered what was available to me in terms of the movement of people through the building. Although the live Arch-OS feed does include the vision system data, I noticed that it was not currently being updated; the last updates were made in 2008. This meant I could not use the actual data from Arch-OS, as it did not represent the current movement of people through the building. However, I had previously seen video footage of the vision system working and so considered using this instead. My production would therefore not be working live, but it would give an impression of what it would look like if the vision system were working. It also relates more closely to the personification of the building, as the video is what Arch-OS (the building) sees and then 'dreams' about at night. The vision system footage looks like this:

I considered a number of different ways of manipulating this into an animation showing the movement of people through the building, but concluded that the best option was to draw upon the video, using the motion recognition system but abstracting it from the video so that its purpose would be harder to recognise. This would make the animation more intriguing and interesting to watch, while remaining simple and dynamic. I therefore began to build my animation in Flash by copying the red squares from the video and their movement through the atria. This meant I could then remove the video itself and be left with the squares alone. Removing the relationship between the footage and the squares makes it less obvious what the data is demonstrating, and therefore makes it more abstract. To see the final animation, see the post above.
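For anyone wanting to recreate the effect outside Flash, the following is a minimal sketch in Python using pygame; the square positions are invented stand-ins for the hand-traced movement data copied from the footage.

    import pygame

    # Invented hand-traced data: for each frame, the (x, y) positions of
    # the motion squares copied from the vision system footage.
    frames = [
        [(100, 120), (300, 200)],
        [(110, 125), (295, 210)],
        [(120, 130), (290, 220)],
    ]
    SQUARE = 20  # square size in pixels

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()

    running = True
    frame_index = 0
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        screen.fill((0, 0, 0))  # black background, as the GreenScreen needs
        for x, y in frames[frame_index % len(frames)]:
            pygame.draw.rect(screen, (255, 0, 0), (x, y, SQUARE, SQUARE))
        frame_index += 1
        pygame.display.flip()
        clock.tick(12)  # slow frame rate, to suit a hand-drawn animation
    pygame.quit()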

Wednesday, 4 March 2009

My Streaming Project

To demonstrate our understanding of this third section of the module, we were asked to carry out a short project relating to streaming. The aim of this project was to use some or all of the resources that had been introduced to create an interesting and relevant product. This could be a video stream, a visualisation of particular Arch-OS data, an animation for the GreenScreen, or a combination of these, depending on what we had found interesting and would like to investigate further.
For my streaming project I was particularly interested in designing an animation for the GreenScreen based on the Arch-OS data, similar to other visualisations produced previously. I was drawn to this concept because it allows for a wide range of different interpretations and also approaches an abstract understanding of the world around us. The idea of being able to personify a building by studying and displaying the changes within it is unusual and relatively unique, in that not many locations yet have technology such as Arch-OS to carry this out. This makes these particular areas more engaging and intriguing, which encouraged me to focus my project on them.
In order to decide more specifically on what my project should be about I decided to make a mind map with which to investigate the possibilities:



By looking for ideas using this mind map, as well as elsewhere, I was able to come up with my project idea. During my search it occurred to me that the vision system divides its data into a grid similar to that of the GreenScreen: it recognises movement using a grid and colours the grid cells accordingly. The similarity between the two grids led me to think of a way in which the movement of people through the building could be displayed on the GreenScreen, so that those outside could see what was happening inside. After discussing this further with my lecturer, however, it became clear that the screen is only really functional at night, a time when there is little movement through the building. I therefore developed the idea further around the notion of personifying the building: by showing the movement of the day through the night, the piece resembles the process of dreaming, and so I could present it as a visualisation of the building 'dreaming'. Once I had this concept in place I began to develop my visualisation.
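To make the grid idea concrete, below is a rough Python sketch of how grid-based motion detection works in general, assuming two consecutive greyscale frames held as numpy arrays. This is only a stand-in for whatever the actual vision system does, with the 50 x 80 grid chosen to mirror the GreenScreen mesh.

    import numpy as np

    def motion_grid(previous_frame, current_frame,
                    rows=50, cols=80, threshold=20):
        # Divide the frame into a grid and flag each cell whose average
        # pixel change between the two frames exceeds the threshold.
        h, w = current_frame.shape
        diff = np.abs(current_frame.astype(int) - previous_frame.astype(int))
        active = np.zeros((rows, cols), dtype=bool)
        for r in range(rows):
            for c in range(cols):
                cell = diff[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
                active[r, c] = cell.mean() > threshold
        return active  # True cells would be the ones lit on the screen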

Arch-OS

The last example of streaming we studied is the Arch-OS system incorporated into the Portland Square building of Plymouth University. Arch-OS is built as an operating system for contemporary architecture: it monitors the building and stores the data, which can then be manipulated to provide a manifestation of the life of the building. In particular, this allows the ecological aspects of the building to be studied, enabling a more environmentally friendly building to be developed. With access to this data, the building can become more dynamic, and works can be produced that demonstrate the life of the building and improve awareness of occurrences within it.


There are a number of different types of data available from the Arch-OS system: vision data, web traffic data, BMS (building management system) data and network traffic data. Some of this data is compiled together as part of the live data stream, which updates every 5 seconds (see right). Data in the live stream includes temperature, humidity, wind speed, electricity and CO2 levels. This is a big benefit, as it means that any installations that work from this data are relevant at the specific time at which they are viewed, because they use up-to-date data.
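As a sketch of how an installation might keep itself in step with this feed, the Python snippet below polls on the same 5-second rhythm; the feed address is a placeholder, since the real Arch-OS stream location is not given here.

    import time
    import requests

    FEED_URL = "http://example.org/arch-os/live"  # placeholder address

    def poll_live_data(cycles=10):
        # One request every 5 seconds, matching the stream's update rate.
        readings = []
        for _ in range(cycles):
            response = requests.get(FEED_URL, timeout=5)
            readings.append(response.text)
            time.sleep(5)
        return readings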


The Arch-OS system has already been used for many different projects. These include a number of visualisations of the available data (see the data screensaver, left), as well as uses of sound around the atria. The Noogy project (mentioned below) was also incorporated with the Arch-OS system so that Noogy portrayed elements of the environment; for example, the wind data was used to animate Noogy's hair, as if it were blowing in the wind. This convergence of the GreenScreen and the Arch-OS system is particularly interesting, as it gave the building a public personality that provided a realistic and dynamic portrayal of the environment.

Overall, the Arch-OS system is a unique and intriguing source of data. By streaming the data live from the building and developing it into a dynamic real-time visual or audio projection, a new awareness is raised of the building's hidden changes and developments, as well as of ecological elements, encouraging changes in people's behaviour to suit the building and the environment better. This is particularly interesting as it suggests a reversal of roles between the building and its inhabitants, in which the inhabitants take a more active role in using the building and are able to see the results.

Monday, 2 March 2009

The GreenScreen

Another topic covered in this streaming section of the module is the Plymouth University GreenScreen. Rather unlike its name suggests, this screen is actually a large LED screen on the front of the Portland Square building. The screen is made from a mesh of 50 x 80 RGB LEDs spanning three floors of the building. Although work can be supplied to the screen at resolutions of up to 330 x 500 pixels, the mesh itself is a particularly low resolution, due to the construction of the screen. This means the images streamed through the GreenScreen have to be well contrasted and not very detailed, as the resolution is not high enough to support complex images. A large amount of movement is also good for the images broadcast on the screen, as movement draws the eye, making the image easier to see as well as more interesting. In terms of actual streaming, the GreenScreen is able to receive images streamed directly from a computer. The overall system can also be publicly interactive through mobile phones and the web, and so can deal with information streamed to it from other locations.
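As an illustration of the kind of preparation this implies, the Python/Pillow sketch below boosts contrast and shrinks an image down to the LED mesh; the filename is arbitrary, and the 80 x 50 orientation is an assumption that would need checking against the actual screen.

    from PIL import Image, ImageOps

    # Arbitrary input frame; convert to greyscale for a clean contrast pass.
    frame = Image.open("frame.png").convert("L")

    # Stretch the contrast first, then shrink to the LED mesh: only bold,
    # well-contrasted shapes survive at this resolution.
    frame = ImageOps.autocontrast(frame)
    led_frame = frame.resize((80, 50))  # assuming 80 wide by 50 high
    led_frame.save("led_frame.png")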
An example of work previously displayed on the Plymouth GreenScreen is the award-winning Noogy.org project.


The aim of the Noogy project was to show ecological and social data around the university campus through a character called Noogy. Using data from the Portland Square building, in which the GreenScreen is placed, and data from the public via text messages, Noogy was able to portray information about the area through his personality. Placing this on the GreenScreen meant it could be received by the public, making them more aware of their environment.
Overall the GreenScreen acts as a public interface by which information can be delivered. It is a unique and interesting take on the concept of Urban Screens which can provide relevant and intriguing information about its surroundings.

Video streaming

The first topic covered in this streaming section of the 106 module relates to video streaming. In particular, this covers broadcasting video over the internet so that others can watch live video that you are streaming, as well as videos you have previously taken. It also includes the ability to share live video broadcast from a mobile phone as well as from an ordinary camera such as a webcam or camcorder. To broadcast a live video stream, a server system is needed to transmit the video to multiple computers. There are two examples of systems that can be used to do this.

The first is the QuickTime Streaming Server (QTSS), which includes QuickTime Broadcaster. This is the Apple streaming server, which allows Apple users to stream live or pre-recorded videos that can be easily received by others with access to the internet. This system allows videos to be received by a number of different devices, including mobiles and set-top boxes, and can broadcast at high quality with high compression, e.g. H.264.
The second example is very similar, in that it is based upon the QTSS. The Darwin Streaming Server is an open-source version of the QTSS which can be run on multiple platforms rather than only on Apple computers. The basis of the product is the same, in that it enables users to broadcast both live and pre-recorded video. With an open-source version, developers are able to take the software and modify it to fit their needs.
As well as these examples, it is also possible to stream using your mobile. An example of a product that enables this is the Qik.com website. Once you are registered with the site, the system allows you to stream video directly from your mobile phone camera to the site, where people can then watch the broadcast. This again allows both live streaming and the streaming of videos that have previously been uploaded or broadcast via the mobile. It is, however, easier to run into problems with this system than with systems such as QTSS. Firstly, it is not necessarily easy to get the right software on your mobile phone, as not all phone types are listed in the initial set-up; some users have to spend more time, and possibly money, to be able to use the facility. Also, by relying on mobile phone cameras for the video, the quality of the broadcast may not always be as good as that of normal cameras. Finally, using a wireless submission system such as that of mobile phones makes the connection less direct, and therefore slower and less reliable.
The concept of live video streaming is quite interesting, as there are many possible ways of using it. All sorts of footage can be broadcast, leading to a broad variety of possible outcomes: videoing your everyday life, a daily video blog, or a live video tutorial. A lot of video streaming has already been incorporated into everyday computing, however, with the help of sites such as YouTube that broadcast information in this way.