After using the vision system video to trace the movement of people through the atria, I removed the video from the animation, leaving just the movement squares, and placed them on a black background to match what is needed for the GreenScreen. However, when drawing the animation I did not match the GreenScreen's resolution, so it is currently the wrong aspect ratio to be streamed properly on the screen. This is not particularly important, though, as the animation is only intended to give some idea of what could be produced. The result is shown below:
Although this is not currently real-time, if it were developed further it could be programmed to run directly off the vision system and so be made more relevant. The animation is quite a literal representation of the original data, but it looks interesting now that the direct relationship with the movement in the video has been removed. This changes the viewer's perspective: without the video to relate to, they are able to focus on the less obvious aspects of the footage shown in the animation. The interaction between sets of squares is particularly interesting, as separate clusters merge together at times, whereas the people they represent were unlikely to have touched at all. This relates to the concept of personal space: because the squares extend the space taken up by the people, the personal space that they occupy is in some way visualised.

Completing this animation brings the streaming section of the module to a close. The next step is to document and synthesise all of the experiences and information gained since the module began, which will then lead on to developing a final project based on what we have learnt.
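As a footnote, here is a rough idea of how the real-time version mentioned above could be prototyped: draw a square for each tracked position onto a black frame at the screen's resolution, every frame. This is only a minimal sketch under assumptions, not what I actually built; the resolution, square size and the get_tracked_positions() stub are hypothetical stand-ins for the real vision system output and the GreenScreen's settings.

```python
# Minimal sketch: movement squares on a black background, scaled to a
# target screen resolution. All values here are placeholders.
import math
import pygame

SCREEN_W, SCREEN_H = 1024, 576   # placeholder resolution, not the real GreenScreen's
SQUARE = 40                      # side of each movement square, in pixels


def get_tracked_positions(t):
    """Hypothetical stand-in for the vision system feed: returns
    normalised (0-1) x, y coordinates for each tracked person."""
    return [
        (0.5 + 0.3 * math.cos(t + i), 0.5 + 0.3 * math.sin(1.3 * t + i))
        for i in range(5)
    ]


def main():
    pygame.init()
    screen = pygame.display.set_mode((SCREEN_W, SCREEN_H))
    clock = pygame.time.Clock()
    t = 0.0
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        screen.fill((0, 0, 0))  # black background
        for nx, ny in get_tracked_positions(t):
            # scale normalised coordinates up to the screen resolution
            x = int(nx * SCREEN_W) - SQUARE // 2
            y = int(ny * SCREEN_H) - SQUARE // 2
            pygame.draw.rect(screen, (255, 255, 255),
                             pygame.Rect(x, y, SQUARE, SQUARE))
        pygame.display.flip()
        t += clock.tick(25) / 1000.0  # roughly 25 frames per second
    pygame.quit()


if __name__ == "__main__":
    main()
```

Swapping the stub for live positions from the vision system would, in principle, give the real-time behaviour described above, with the squares already at the right resolution for the screen.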