Reflection - 3/23
Final Demo + Conclusion
In all, I quite enjoyed working on this project and think the final showcase turned out great. It was awesome to see the projection working live in an exhibit setting, and I'm starting to enjoy this presentation style much more: you can have conversations with people and let them explore things themselves instead of feeding them every part of the project before they get hands-on. It was also great to hear the different perspectives and ideas from people walking by, since the feedback circle had been such a small group throughout the working process of this first project.
If I were to work on this project again, I would think about its scope and about methods of presenting the information earlier in the process. I spent a lot of time considering the 5 different interactions and the system as a smaller part of the whole, but walking through all of that took up a lot of the presentation, leaving open questions about how each one specifically breaks down and taking away time from showing previous iterations, which could have easily answered questions asked during the final presentation. For the next project I'll try to think less about every single individual interaction and more about the system as a whole, so the broad idea can be digested first, leaving specific interactions to speculation and "further improvements".
Updating Presentation Visuals
To make the final visuals clearer in the presentation, I updated each part of the journey map with a gif intended to be understandable on its own, alongside additional text.
Stepping into room
Standing in circle for data history mode
Having a conversation
Taking out laptop
Plugging in outlet
Getting up from seat
Agency
Leaving room
Additionally, I updated the journey map + system diagram to follow more of a narrative: someone discovering the system in the space, going through the different interactions, and finally leaving the space.
For the most part, the system diagram remained the same, but the AR/VR components were replaced with 2 new interactions - history display and data agency toggle.
Reflection - 3/17
Implementing ANOTHER Teachable Machine Model
In order to get audio to work with the WebGL scene, I trained and set up another Teachable Machine model, using the audio model this time.
The code approach was similar to the first PoseNet models, except this model required a global "listener". The predictions from this model were then placed in another div component as an HTML layer over the particle code.
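For reference, the setup looks roughly like this (a simplified sketch based on the standard speech-commands export snippet that Teachable Machine provides - the model URL, div id, class name, and window variable below are placeholders, not my actual ones):

```js
// A minimal sketch of the audio model setup, assuming the tfjs and
// speech-commands scripts from the Teachable Machine audio export are
// already loaded (they provide the global `speechCommands`).
const AUDIO_URL = "https://teachablemachine.withgoogle.com/models/<model-id>/";

async function initAudio() {
  // "BROWSER_FFT" acts as the global listener: it opens the microphone
  // once and streams FFT frames into the model continuously.
  const recognizer = speechCommands.create(
    "BROWSER_FFT",
    undefined,
    AUDIO_URL + "model.json",
    AUDIO_URL + "metadata.json"
  );
  await recognizer.ensureModelLoaded();

  const labels = recognizer.wordLabels();
  const container = document.getElementById("audio-label-container");

  recognizer.listen(result => {
    // result.scores holds one probability per class, in label order.
    container.innerHTML = labels
      .map((label, i) => `${label}: ${result.scores[i].toFixed(2)}`)
      .join("<br>");

    // Expose the "Conversation" score globally so the particle update
    // loop can read it (class name and variable name are assumptions).
    const idx = labels.indexOf("Conversation");
    window.uConversation = idx >= 0 ? result.scores[idx] : 0.0;
  }, {
    probabilityThreshold: 0.5,
    overlapFactor: 0.5,
    invokeCallbackOnNoiseAndUnknown: true,
  });
}

initAudio();
```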
Reflection - 3/14
Over spring break, I made a lot of good progress on creating the final prototype. Below are the updates split into key learnings:
Implementing Teachable Machine
Although my first idea was to create a custom class or .js file that the Teachable Machine components could be read from, I ultimately went with injecting them as an HTML element. I chose this method for 2 main reasons:
As an HTML element, I could easily manipulate the display order, positioning, visibility, and opacity of Teachable Machine's components. This also meant being able to show informative predictions at the top left, explaining what each projection means, ON TOP of the particle simulation that I had already been building.
By using custom JavaScript scripts in the HTML, I could still run regular JavaScript functions but also get the benefit of creating window variables. This means different .js and shader files can read those values as globals and pass them straight into shaders as uniforms.
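A simplified sketch of how that bridge could look (the class names and helper name here are stand-ins for illustration):

```js
// A minimal sketch of the window-variable bridge, placed inside the
// injected <script>. The uniform names match the ones mentioned later
// (uLeft, uCenter, uRight, uDevice); the class names are assumptions.
window.uLeft = 0.0;
window.uCenter = 0.0;
window.uRight = 0.0;
window.uDevice = 0.0;

// Called after each prediction with the arrays returned by model.predict().
function publishPredictions(locationPrediction, actionPrediction) {
  for (const p of locationPrediction) {
    if (p.className === "Left")   window.uLeft = p.probability;
    if (p.className === "Center") window.uCenter = p.probability;
    if (p.className === "Right")  window.uRight = p.probability;
  }
  for (const p of actionPrediction) {
    if (p.className === "Device") window.uDevice = p.probability;
  }
}

// Any other .js file on the page can now read the same values, e.g.:
// const deviceStrength = window.uDevice; // 0.0 - 1.0
```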
Custom Teachable Machine Code
Luckily, for the most part, the JavaScript implementation for Teachable Machine is well documented, with an API linked on GitHub. After some digging I was able to set up the HTML element shown above: some text at the top, a button to enable the camera, and a list of categories with their live predictions displayed underneath.
The Teachable Machine code itself is split into 3 main components: a webcam input, a prediction made by the Teachable Machine model, and a loop that updates the predictions every frame and passes them to the particle visual and behavior shaders.
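The skeleton of those three pieces looks something like this (a simplified sketch based on the standard tmPose export snippet - the model URL and container id are placeholders, not my actual ones):

```js
// A sketch of the three components using the tmPose library from the
// Teachable Machine pose export.
const POSE_URL = "https://teachablemachine.withgoogle.com/models/<model-id>/";
let model, webcam;

async function init() {
  // 1. Webcam input + model load.
  model = await tmPose.load(POSE_URL + "model.json", POSE_URL + "metadata.json");
  webcam = new tmPose.Webcam(400, 400, true); // width, height, flip
  await webcam.setup();
  await webcam.play();
  document.getElementById("webcam-container").appendChild(webcam.canvas);
  window.requestAnimationFrame(loop);
}

// 3. The per-frame loop: refresh the webcam frame, predict, repeat.
async function loop() {
  webcam.update();
  await predict();
  window.requestAnimationFrame(loop);
}

// 2. One prediction: PoseNet keypoints first, then the custom classifier.
async function predict() {
  const { pose, posenetOutput } = await model.estimatePose(webcam.canvas);
  const prediction = await model.predict(posenetOutput);
  // Each prediction[i] is { className, probability }; these values are
  // what get handed to the particle visual and behavior shaders.
}

init();
```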
As I realized I needed 2 posing Teachable Machine models, I had to figure out how to implement a "double prediction" using the same live webcam feed.
To do this, I split the prediction into 2 models with separate URLs: a "location" model that tracked position (left, right, center) and an "action" model that tracked whether people were posing with a device out. Although the predictions came from separate models, I reused the pose estimation from the first (location) model for both of them to reduce the time spent on each frame. The predictions were then split into 2 separate div HTML elements and set as window variables that could be accessed from any part of the repository in JavaScript.
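Roughly, the shared-estimation part looks like this (a sketch building on the snippets above - the model URLs and div ids are placeholders, and `publishPredictions` is the stand-in helper from the earlier sketch):

```js
// A sketch of the "double prediction": both models load from their own
// URLs, but the expensive pose estimation runs only once per frame and
// its output feeds both classifiers.
const LOCATION_URL = "https://teachablemachine.withgoogle.com/models/<location-model>/";
const ACTION_URL   = "https://teachablemachine.withgoogle.com/models/<action-model>/";
let locationModel, actionModel;

async function loadModels() {
  locationModel = await tmPose.load(LOCATION_URL + "model.json", LOCATION_URL + "metadata.json");
  actionModel   = await tmPose.load(ACTION_URL + "model.json", ACTION_URL + "metadata.json");
}

async function predictBoth() {
  // Run PoseNet once on the shared webcam canvas ...
  const { posenetOutput } = await locationModel.estimatePose(webcam.canvas);

  // ... then classify the same output with both models.
  const locationPrediction = await locationModel.predict(posenetOutput);
  const actionPrediction   = await actionModel.predict(posenetOutput);

  // Show each model's classes in its own div.
  document.getElementById("location-labels").innerText = locationPrediction
    .map(p => `${p.className}: ${p.probability.toFixed(2)}`).join("\n");
  document.getElementById("action-labels").innerText = actionPrediction
    .map(p => `${p.className}: ${p.probability.toFixed(2)}`).join("\n");

  // Publish the values as window variables (see the earlier sketch).
  publishPredictions(locationPrediction, actionPrediction);
}
```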
Getting the particles to speak with Teachable Machine
To get the particles to respond to the window variables, I set up a custom class for the particles to inherit that would override the user-given values with the values from the Teachable Machine predictions.
However, I realized this "update" only ran on the first frame after page load, so the particles ended up storing values of 0.0 for all uniforms. To fix this, I wrote a loop to update the particle shaders and placed it with the Teachable Machine HTML code:
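A simplified sketch of that loop (assuming a three.js-style `uniforms` object - the actual particle class and material names in my code differ):

```js
// A minimal sketch of the per-frame fix: copy the window variables into
// the shader uniforms every frame instead of only once on page load.
function syncUniformsWithPredictions(particleMaterial) {
  particleMaterial.uniforms.uLeft.value   = window.uLeft   || 0.0;
  particleMaterial.uniforms.uCenter.value = window.uCenter || 0.0;
  particleMaterial.uniforms.uRight.value  = window.uRight  || 0.0;
  particleMaterial.uniforms.uDevice.value = window.uDevice || 0.0;

  // Re-run every frame.
  window.requestAnimationFrame(() => syncUniformsWithPredictions(particleMaterial));
}

// Kicked off once from the Teachable Machine HTML script:
// syncUniformsWithPredictions(particles.material);
```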
First WIP with position implementation
Here is the first WIP I recorded after getting 1 Teachable Machine model loaded. Uniforms uLeft and uRight were sent to the particle visual shader to update colors based on where I was located in the webcam feed.
Updated WIP with new HTML styling, better position implementation, and device implementation
Here is an updated WIP I recorded after getting both Teachable Machine models loaded. Uniforms uLeft, uCenter, uRight, and uDevice were sent to the particle visual and behavior shaders to drive particle distortion, color, and movement.
Reflection - 3/4
During this week's class I spent my time working on the journey map, building out a presentation format, and planning for the prototype that I would need to build over break. I made a lot of good progress figuring out how to present the work I've been making over the past months, and am excited to see how everything will look in the end.
Journey map
As I missed class on Tuesday, a bulk of the time on Thursday was spent working on this part of the project. I came into class with the version presented below and received some feedback on how to redesign it:
Generally very wordy. Could break apart sections into more rows for what is specifically being described
Maybe replace Thinking/Feeling/Doing with descriptors more specific to this particular project
Thinking can be replaced by single words or emoji reactions
Top labels should reflect a flow for a person going through the interaction, so items like discovery, data use, debriefing, etc
Presentation
For the presentation, I created a general outline that follows:
Concept/problem introduction
Research question + goals
Space showcase
Present journey map/flow
Systems diagram
Walk through individual parts in journey map (5 key interactions)
Demo
Conclusion
After showing a couple of people the flow for the presentation, the feedback I received was fairly consistent: go over each part in a storytelling way that makes sense, possibly talk about the thinking that led to the selection of the final idea, and update the interaction slides to match the updated journey map.
Prototype
Before going into break, I finished planning what I needed to build for the final presentation/showcase. I ended up selecting Teachable Machine because of its ability to label movements like "taking a device out" and to recognize what sort of audio counts as "conversation", as well as it being natively built in JavaScript so that I could (somewhat) straightforwardly add it to the demo I was already building in WebGL.
In the end, I expect my demo to use 2 Teachable Machine models, covering:
Posing: Position in local space, device out, clapping/jumping (for data visibility)
Audio: Ambient noise, Conversation, Loud conversation, clap
Using these elements, I could take in a float value of 0-1 for each class the model predicts and feed it into a shader (in theory) that could use those uniforms to manipulate colors, movement, opacity, etc.
So using those uniforms, I would need to build the following "animations" (a rough mapping is sketched after the list):
Walking around left to right
Taking out device
Having a conversation
Displaying a conversation that gets too loud
Data visibility toggle
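To make the plan a bit more concrete, here is a rough, speculative sketch of how those predictions could map onto uniforms - every class name, uniform name, and pairing here is a guess at this stage and would change once the models are actually trained:

```js
// A speculative mapping from planned prediction classes (0-1 floats)
// to the shader uniforms driving each animation above.
const ANIMATION_MAP = {
  // posing model
  "Left":              { uniform: "uWalk",    effect: "fluid distortion follows position" },
  "Right":             { uniform: "uWalk",    effect: "fluid distortion follows position" },
  "Device out":        { uniform: "uDevice",  effect: "pulse inwards" },
  "Clap/Jump":         { uniform: "uAgency",  effect: "toggle data visibility" },
  // audio model
  "Conversation":      { uniform: "uFlow",    effect: "curl-noise particle flow" },
  "Loud conversation": { uniform: "uDistort", effect: "stronger distortion when too loud" },
};

// Clamp each prediction to 0-1 before it becomes a shader uniform.
const toUniform = (probability) => Math.min(Math.max(probability, 0.0), 1.0);
```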
Although plugging a device into an outlet and getting up from a seat are both part of the final system I imagine, they can't be tracked by Teachable Machine, so I plan on presenting those ideas through animated gifs instead.
Reflection - 2/25
Over this week, I was able to narrow down my idea for an in-class demo, clarify the final bits of interaction needed to complete my system, and receive good feedback on some possible holes in interacting with it.
Feedback
At the very beginning of the week, I proposed my updates to the system along with the 2 new interactions in the space (history and data agency), which were both received well. After talking to Elizabeth I was planning to set up an ultrasonic sensor for a live demo, but was persuaded by Dina to go the Teachable Machine route to classify more specific interactions. Additionally, discussions made me realize that some visualizations may be hard to understand for a visitor/student who doesn't know what data each one represents, so I need to think about adding some sort of labeling to the final prototype/visualizations.
Prototype planning
For my prototype, I propose building a live interactive projection using WebGL that tracks interactions similar to the system proposed for TCS. Over the weekend, I'll try to set up a projector and a laptop's live input mapped to custom Teachable Machine training sets with indicators like audio loudness, movement, and detecting a device being taken out. It might be ambitious to do all of these, so I may end up narrowing down to a single movement/cue.
Reflection - 2/22
During the mid-project critique, I received two main points of feedback directly from my presentation:
Make sure you know what the Mites sensors can actually track. Information like where a person is physically located in the room isn't possible to capture.
Fidelity of visuals may be taking away from the reason for what the interaction in the space is?
For the first point: because it's difficult to know how precisely the Mites sensors can actually collect data and we aren't given practical data to work with, I revised my systems diagram to work with additional inputs that give the system better spatial readings. Although the sensor does not directly know where things are within the space, I believe it's safe to argue that different sounds and readings can be mapped to specific parts of the space with a model trained individually for that space.
For the second point: although I intended the group of individual interactions to act as a sort of indicator of people's own behaviors, I can see Daniel's point that it may be difficult to tell what each one is for without specific guidance or a clear purpose tied to the space. Through another feedback session with Elizabeth over the weekend, I received a very good thought starter: the system is in place, so I could now expand on the personal-choice part of the interaction to make users feel like they have agency over the system's visualizations. There were two concepts that I enjoyed:
“History” display: With a double tap on a specific part of the wall, a user could see the history of movement, interactions, and physical representations within the space. The visual playback would give the user information like foot traffic, areas where people tend to work, where common outlets are, and how loud the space is.
Turn on/off: If users feel distracted or don’t want personal data being shown, they can double tap a table where they are working. This turns off the data being shown at the current time, but can still be seen in the history mode display to help students see the data of the space at a later time.
What are the precedents informing your design?
The established use for the public working space (focus and collaboration)
How does your system make a connection to the data being collected?
Collects data of what happens in the space. Can be used to display in the future as a “replay” for visitors or live to help individuals become more aware of their surroundings
How would the users understand how to interact with your system?
Independent interactions: collected and updated in real time. List of 5 interactions.
History display: Pulsing is projected from part of the wall near the Mites sensor. If the user double taps the wall, visualization will cycle through the past 3 hours of collected data.
Remember that the sensors collect data in space. Not about each occupant
It can be argued that the sensor could be trained on a model specific to the space so it knows where things like sounds map to, or that additional external inputs like a Kinect could be added so it knows more about specific locations within the space. Alternatively, the visuals could be made less specific, presenting a guess for where things are happening.
What is the value that you’re basing your concept on? Data awareness? Transparency?
Data awareness
How are you going to build the concept?
WebGL, Houdini
Reflection - 2/15
Addressing Feedback (2/10)
From the mid-project review shared on Thursday, I received a lot of good feedback from Elizabeth that made me notice some of the holes in what I was creating. For the most part, the overall idea was well received and I could now start thinking in-depth about how I wanted to represent things as a part of the larger system. As a summary, here were the notes condensed into a few key points:
Think about the sensors as not individual inputs but as a whole. Instead of mapping audio, think about specific interactions that can be mapped and represented in unique ways.
With projection being the main form of visualization, consider how 3D projection must be set up for protruding objects.
What defines the personality of the space and what sort of reactions would the space have?
Point 1/Point 3
To address the first point, I began brainstorming a list of possible interactions that could happen in a public working space like TCS. For the most part these are simple interactions, but together, I believe, they would make the system feel very immersive:
Getting up from seat
Having a conversation with others in the space
Leaving/arriving/settling into the space
Taking out a device and using it
Changing posture
Walking in the space
Standing vs sitting
Device cord plugged in
From here, I narrowed the list down to 5 key interactions that I thought would work well together and mapped them to unique visualizations through the particle system:
1. Getting up from seat - rising water effect
2. Conversation - Particle flow noise (curl noise or vector field maybe?)
3. Taking out and using a device - Pulse inwards
4. Walking - Mousefluid distorting/displacing space
5. Device cord plugged in - Electric shock from outlet, particles magnet to table
Point 2
For the second point, about projecting correctly onto objects of different heights from 2D space, I tried prototyping different methods of screen-space displacement to figure out how this could work. Elizabeth had mentioned the way this is done in TouchDesigner, so I thought about how it could be translated to another program like Cinema 4D.
UV Displacement:
This method uses geometry topology and view-oriented UV projection to give each object in the space a unique position within a UV tile onto which a video texture is projected.
Low poly extrusion:
Using this method, I tried giving a flat UV projected plane areas of height extrusion to see if this could push the projection higher up for objects not located on the same ground plane.
Systems Diagram
Reflection - 2/3
Speak Data
First of all, I found it surprising that an agency like Pentagram uses data as a main driver for their more visual graphic design work like branding and experiences. I had always thought of them as a more traditional branding team, so it's interesting to see this unique part of their practice.
I think the main takeaway I got from the video was to think about data as something that can be much more personal than people assume. Instead of tracking things like number of steps, calories, etc., we can think of data as a snapshot of ourselves. It can convey emotions, how we are feeling, and why we are doing things, but also, at a much more macro level, where something sits in a bigger picture, like how each individual garment in the MoMA exhibit was mapped. Data is just part of how we create experiences, and we should be more creative with how we use it to create unique ones.
Isha’s Talk
From Isha’s talk on Tuesday, I want to shift one of the main focuses of this project to data awareness. The types of data collected from sensors are often unclear; even when visiting the TCS building, it was never clear what sort of data was being collected and why. Of course there were explanations on the doors entering and exiting the building, but an indicator like that can simply be ignored or tuned out by people moving frequently in and out of the space. If I can incorporate interactions that help passersby/students understand and react to what is being collected, I think it will make them more comfortable with the “black box” sensors installed in the space. Another idea I was thinking about was agency - Isha mentioned that blatantly explaining the data may not be enough. More important is that the person whose data is being collected has the choice not to be collected from. I don’t think this has that big of an impact on my virtual environment, because it is relatively harmless and meant to help the space, but of course there may always be a person who doesn’t want to partake. I’ll need to think about this part a bit further.
Deciding on a Space
After visiting TCS Hall, I had 2 localized spaces in mind to build a virtual environment for.
Spot 1: Floor 3
This space on the third floor is somewhat tucked away from the rest of the floor but has close access to the kitchen and staircase so I would expect foot traffic to be mid-high. I really enjoy the long hallway space because I think it would be good for tracking movement, and the limited number of chairs/desks makes it so this space would never need to be too loud.
Pros - Plenty of hallway space for projections, Walls on opposite sides, open floor space, A location of mid-high foot traffic (getting to and from stairs)
Cons - Shape is not as open. Doorways on opposite sides, but this may not necessarily be a bad thing.
Spot 2: Floor 4
This space is right outside one of the staircases up to this floor. There are many desks and collaborative working spaces, and it feels much more open compared to the other closed-off areas.
Pros - Plenty of working desks, walls on 3 sides, place of heavy foot traffic
Cons - May be too crowded (can imagine projection mapping taking away from the working in the space), all walls occluded by something (windows, tv screen that may be in use, chairs, etc)
Ultimately, I think I'll be going with the spot on floor 3. There is much more projection/screen room, space to place speakers/audio without being too crammed, and a more fitting amount of foot traffic that could be good for visualization. This area is a bit more tucked away so I don't think there would be as many distractions compared to right out of the staircase.
Thinking about additional storyboard interactions
Besides inputs changing the environment - like the time of day reflected in scenes, loud noise distorting the projection, or quiet increasing the ambient strength of the virtual environment - I created some quick sketches of interactions that could take place specifically in this space.
- speed, noise, devices out (accelerometer) affect the type of new flowers
- add to ambience of existing environment, informative of making the entire system feel whole (walls, floor projection, etc)
- inform people what type of data is being collected (time of day, noise, camera, accelerometer)
- something to get up as a break
- informs visitors about data being collected within the space
- devices out (accelerometer), localization of people within the space, noise
Reflection - 2/1
Designing the Behavior of Interactive Objects
The way the paper describes figurative speech as helpful for designing the behavior of interactive objects is quite similar to what we are learning in Persuasion - it helps describe and convince others of our thoughts on designs in a simplified way. The way they use personalities to guide the creation and improvisation of interactive objects is also interesting. It gives us a better sense of connection to the things we build, and helps the people who use them feel the same way.
Medium: Top 5 Learnings for Visualizing Data in AR
The biggest takeaway from this article for me is this - AR for visualizing data is still very much a young concept. The concept of AR is promising, but it's much harder than expected to find uses that outweigh the "normal" way of doing things. There have been many good examples (but also very, very many bad examples), and it's important to remember the environmental context to help create a good use case. Pokemon GO, I believe, is a good example in that it is a friendly way to get people out in the world to play a game, with an option to turn the AR setting off as well. It's important that this "feature" isn't forced and is instead an option players can toggle on and off depending on their situation (inside vs outside, public vs private setting, low light, etc.).
Data Imaginaries
The two projects this video showed were interesting in unique ways. While the one with the sphere control aimed to better visualize the data being sent and received by voice assistants, the paper roll one helped visualize how data isn’t always what you expect because of where it comes from. I think both are important to think about when working on this first project, but the second is much more relatable to the prompt I’m working on. With a unique dataset for a specific location within TCS, the virtual environment I’m creating would have different effects if it were placed in one location compared to another.
Project Personalities
Based on the readings, I decided on three personality traits for the project: dynamic, gentle, and sensitive. My vision is that the virtual environment is something that visitors and students within the space can appreciate (dynamic), but at the same time isn’t something that gets in the way of them working within the space (gentle). The environment is meant to help productivity, and anything that may interfere, such as loud noises or too many people, would be reflected in the virtual scene (sensitive).
Case Study: Shinjuku Station Tokyo
This installation was recently created by Moment Factory for Tokyo’s Shinjuku station. Although it doesn’t seem directly interactive, the large horizontal screen and colors within the long hallway set a mood for the space which is very calming.
I am inspired by this project to try to create a mood within the space using visuals and additional color lights within the space. The slow but colorful visuals create a sense of relaxation which I could see my virtual environment encompassing.
Building a Prototype
Since I knew the technical implementation of this project would be the hardest part to figure out, I decided to spend a little more time on the weekend to try exploring some tools that I could use to create the virtual environment. As a first go-to, I went with exploring in WebGL.
Creating something on the web meant that a multitude of devices could connect to a test playground easily, I wouldn't have to worry about performance, and most importantly, live feedback could be used as the environment's inputs. I had experience writing shaders for 3D on the web and thought I would give it a try for something much more visually encompassing. I ended up building a test environment with flexible shaders that could change parameters like fog thickness, colors (time of day, mood, etc.), movement, and distortion within the environment. Additionally, I added some sliders to control different inputs taken in from the TCS environment (though I couldn't manage to figure out how to hook them up to the individual shader uniforms in code).
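For future reference, hooking a slider to a uniform could look something like this (a minimal sketch assuming a three.js ShaderMaterial - the element ids, uniform names, and shader sources are placeholders, and my actual test scene may be set up differently):

```js
// A minimal sketch of wiring an HTML range slider to a shader uniform.
const material = new THREE.ShaderMaterial({
  uniforms: {
    uFogThickness: { value: 0.5 },
    uTimeOfDay: { value: 0.0 },
  },
  vertexShader: document.getElementById("vertexShader").textContent,
  fragmentShader: document.getElementById("fragmentShader").textContent,
});

// <input type="range" id="fog-slider" min="0" max="1" step="0.01">
document.getElementById("fog-slider").addEventListener("input", (event) => {
  // The new value is picked up automatically on the next rendered frame.
  material.uniforms.uFogThickness.value = parseFloat(event.target.value);
});
```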
Although I'm pretty happy with what I built, I quickly found myself hitting a graphics and implementation barrier. Some of the issues were:
- Unable to process live audio (unless I wanted to deal with ANOTHER custom library)
- Lack of live lighting and shading (all textures had to be pre-baked or use a point light that had no "smart vision" for occluding objects and shadows)
- Restrictions on size and other data types (couldn't use particles, larger file sizes, or shared geometries)
- My lack of understanding of JavaScript compared to friendlier tools like node systems in 3D packages
Moving forward, I think I will try a game-engine-style 3D package like Unity or Unreal Engine. I found the in-class TouchDesigner tutorial quite interesting and promising, but I think building something like this there would require much more programming, which a game engine could simplify.
Additionally, moving forward, I'll need to think more about the mood of the space when building the visuals. Although I tried to set moods with the different environment "settings" in WebGL, the technical work of setting up the shaders took away from the creative vision, since I ended up having to change each individual color by hand due to the lack of modern game-engine conveniences like shared materials and colored lighting. Hopefully switching to Unreal or Unity will make this part much easier.
Reflection 2 - 1/27
IoT Data in the Home
In all, I think this study does a great job shifting the framing of IoT data in the home from a method for visualizing personal data to truly building relations for people in their homes. I particularly enjoyed this quote from the reading: “HCI and design researchers have turned their attention to data as a design material when creating experiences for connected devices in the home or home automation systems” (3). With this perspective, this first project makes much more sense to me: we are building a physical conduit that can use the collected data for human interaction. Engagement is how I think data will be used in the future, and finding that relevance in this studio is a good step in that direction.
Data Materiality Episode 4
It was difficult to keep up with the conversation in this podcast, but the thing I took away from it is that the space of creating interactions using data is still very young - ideas like "datasets vs. data settings," as described in the speaker's book, are still speculative. Since data is so broad, techniques discussed in the podcast, such as researching another source of data, help us understand what is important about a specific dataset and what the unique characteristics of the recorded data are. Although we can't do the same for this first project, I think studying other related projects and trying to visualize what sort of data would come out of a space like TCS will be important for creating our own interactions.
From my feedback on 1/25’s share, I compiled 4 main points:
The ping pong ball is a good thought starter but may become a distraction with the unknown of when it will drop, it being unpredictable, etc.
What sort of interaction could be more inclusive in a shared work space? This seems more like a solo interaction than one considerate of others in the space.
Possibly a smart idea for independent work, but shared workspaces exist for productivity in the first place. Would people even use this in a shared space?
The good part of this thinking is the realm of productivity. Possibly reframe the question? Consider exploring inputs and maybe even virtual environments.
From here, I reframed my research question into:
How can data be used in public working spaces to create an environment optimized for better performance and creative thinking?
More Research
Because I personally enjoy working at home alone, I realized that I lacked an understanding of why people work in shared workspaces like TCS Hall. To understand the specific reasons why, I began to research one of the most popular kinds of workspace - cafes.
Besides the typical reasons like access to caffeine and good Wi-Fi, I was surprised to find that people work in cafes for the ambience. The lack of temptations, the unlikeliness of being bothered, and the sensory input from the environment - ambient sounds, smells, and sights - create a unique experience every visit, which can increase creativity. Studies such as this one provide evidence that subtle background noise increases productivity in a surprising way compared to no noise, while too much noise may decrease it.
Developing Concept
With my interest in virtual environments and this newfound research, I became curious whether I could create a semi-real environment within a shared workspace that could generate unique experiences of visuals and sounds in real time based on the data inputs from the Mites sensors. With a virtual environment providing ambient sounds, it could increase the productivity of students within the space and give a boost of creativity.
One particular project I found interesting was this one:
This system works by taking in MIDI inputs from an AKAI controller to change an Unreal Engine scene in real time. Different parameters can be manipulated to change things like visualization, distortion, wind, sunlight direction, and colors. Although it isn't used for a real-life environment, it is interesting to see what is possible with real-time effects today.
Inspiration for view based visuals/audio
For 1/27’s class I brainstormed two different methods to introduce this environment - a large screen display and projection mapping. Although projection mapping throughout the room would be much more immersive, dedicating an entire wall as a "window" into a virtual ambient environment could also be interesting.
Planning Possible inputs
How can the types of data collected in a space be better visible and be used in public working spaces to create an environment optimized for creative thinking?
Looking through the Mites sensor data, I planned a series of mental "sliders" that can be used to manipulate the virtual environment. As values in code, these 0-1 floats could be mapped to characteristics such as the amount of plants growing or the thickness of fog based on the humidity in the room.
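As a tiny illustration of the slider idea (the sensor names, ranges, and parameters below are invented for the sketch), each normalized reading could be linearly mapped onto an environment parameter:

```js
// A sketch of the "mental sliders": each sensor reading, normalized to
// 0-1, is linearly mapped onto an environment parameter.
const lerp = (a, b, t) => a + (b - a) * t;

function mapReadingsToEnvironment(readings) {
  // readings.humidity, readings.occupancy, readings.noise assumed 0-1
  return {
    fogThickness: lerp(0.05, 0.6, readings.humidity),  // more humid -> thicker fog
    plantGrowth:  lerp(0.0, 1.0, readings.occupancy),  // busier -> more plants grow
    windStrength: lerp(0.1, 0.9, readings.noise),      // louder -> more movement
  };
}

// Example: mapReadingsToEnvironment({ humidity: 0.7, occupancy: 0.3, noise: 0.5 })
```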
As a result, students/visitors of the space could interact with the virtual environment by:
Recognizing that the environment changes based on their actions and exploring different configurations (devices, bringing friends into the space, being loud vs. quiet, etc.)
Moving/sitting in different locations in the room/area to change the environment to a new setting
3D Storyboard
Reflection 1 - 1/25
Molly Steenson on how AI impacts design
It was interesting to see examples of very early systems design thinking from the past that I was unaware of. I think when people hear the terms “design” or “architecture”, they think of physical products or buildings, but the field is shifting largely into a space we can’t see. Companies like Google and Facebook are centered around systems that don’t exist in physical space, and I can see similarities to the researchers Steenson showed, like Gordon Pask and Paul, in the ways they studied how humans interact with digital computers through cybernetics. As well, all of the robotic/machine learning examples shown in the video make me excited for this upcoming semester - physical computing and design that uses data for interaction certainly feels more expressive than data visualization.
Anatomy of AI
This was a very well-thought-out website/map that makes me both excited and fearful for the future of AI and a society shifting to be based more around technology. On one hand, the idea of creating the most efficient systems for resources, labor, and data management sounds like a challenge the greatest designers can handle, but at the same time it would mean exploitation in all three categories. The “best” is almost always the cheapest and most efficient in a consumer-driven economy, and this website opened my eyes to what could be happening behind the scenes of someone receiving their first Amazon Echo. Moving forward, I think this article is a good one to come back to when thinking about how small decisions in design can trickle down into many other changes for different people/parts of a system.
Enchanted Objects (390-412)
I quite enjoyed this article’s idea of DIY “hacking”. In a time where we are seemingly becoming much more similar to each other through automated services, algorithms that predict us, etc., building and programming things yourself to complete tasks is a unique way of expressing your personal ideas. Although automation by large companies greatly improves our way of living, it also makes us a bit oblivious to things happening behind the scenes. Continuing to learn and “hack” what we use is, I think, something humans will keep doing no matter what innovation happens.
Diagram for data interaction of space
In trying to figure out what sort of interaction I want to explore for this project, I listed out some common things that take place in public studying areas:
Sounds of machines
Machines on/off
Calls/notifications
Meeting times
Group meeting
Whiteboard interactions
Taking notes
Setting up meeting
Track people ready
Consolidate exchanges happened during meeting
Exchanging notes
Eating
Managing a schedule of classes
Tracking priority work vs later down the line work
Additionally, some interactions mapped to different hardware capabilities of the Mites.io sensor:
Camera
How busy/how many people
What sort of devices are out
Acoustic/accelerometer
Are people talking/in meeting or deep work
What sort of devices are out
Thermal image
Are devices a large indicator of what's being used
Possibly tiredness?
A possible desk object meant to sense movement within the space. The sensor tracks information about the space, relays it to the desk object, and sedentary people are encouraged to move through a specific interaction.
With these ideas written out, I became curious about using data to create a movement "engager". A lot of the data that can be collected from a study space may help show what people are doing within it, and as a result, be a good indicator of when people should get up from their seats to take a break. Although there are many digital versions of this, such as the Apple Watch's notification to stand up, it is easy to quickly become desensitized to that sort of ping. Instead, I think it would be cool to try to get people to actively get out of their seats:
A ping pong ball is attached to a magnetic setup. A timer starts and speeds up/slows down based on characteristics like how focused the work is, use of machines, temperature, etc.
At a certain point, the magnetism is turned off and ping pong ball drops. User is forced to get up from their seat and pick it up. Magnetism disabled until certain amount of time - forces user to stand up, think about ideas, etc.
Force you to get out of seat
Magnetics have to be picked up + attached back to system to continue using
Research Question Brainstorm
To further explore the project, I began to focus on two different research questions:
How can we create intelligent notifications timed with focused work and leisure time?
How can data be used in public working spaces to encourage breaks, and as a result, better performance and creative thinking?