
Elizabeth Han – Personal Blog

Hi, I’m Elizabeth 😊

I study Design at Carnegie Mellon, with minors in HCI and Intelligent Environments. I strive to be a designer who embeds meaningful interactions in all of our perceived “spaces.” In my projects, I try to achieve this at multiple scales, connecting people to their loved ones, their environments, and ultimately greater purposes.

Project 1

Reflection on “A Survey of Presence and Related Concepts”

After reading the paper “A Survey of Presence and Related Concepts,” I was intrigued by its dense and methodical approach to evaluating the quality of a virtual environment. My design process is guided by user research, but I had never thought to apply such a vast and debated range of concepts, such as place illusion, plausibility illusion, coherence, immersion… and many more. In the context of our project, I wonder to what extent the experience can evoke co-presence or social presence, and which mediums can achieve that. From a cognitive standpoint, the paper states that even a phone call can provide ample feelings of social presence despite its simple form of communication. This makes me think that social connections don’t necessarily require the “immersion” we imagine when we think of virtual reality spaces, but rather need a simple but effective method of connectivity, such as the heartbeat band we saw on the first day of class. I also became interested in the idea of transportation bringing an advanced feeling of embodiment. For example, is it possible to design an experience that brings back feelings of bodily agency to patients who suffer from paralysis? Furthermore, how does the feeling of “being there” differ across different accessibility levels? Lastly, I wonder how and why the current “virtual reality experience” through fully immersive headsets became predominant. Visual stimuli seem to be only one of the many parts of an immersive experience, yet designs in the consumer market seem to be dominated by the fidelity of visual perception, rather than considering the more integral parts of what makes an appropriate experience, as dissected by this paper.

Reflection on “Social immersive media: pursuing best practices for multi-user interactive camera/projector exhibits”

The paper “Social immersive media: pursuing best practices for multi-user interactive camera/projector exhibits” shares various design strategies for optimizing socially immersive media: “an interaction in a shared social space using a person’s entire body as ‘input device’ unencumbered by electronics or props.” While E studio projects focus a lot on the full-body interactions mentioned in the paper, I could find very few tips and tricks online while working on them. As such, I was pleasantly surprised to see a number of helpful guidelines for creating a seamless interactive experience. In the section on narrative models, it was interesting how the level of abstraction decreased across the four examples: experiential, performance, episodic, and game. I’m personally a fan of conceptual experiences like the experiential narrative example of Boundary Functions, as it employs very simple graphics and relies heavily on participatory conditions. (The caveat, though, is that maybe I’m just one of the adult audiences who try to intellectualize the experience rather than accept it viscerally.) I wonder to what extent abstraction will play a role in our projects moving forward — how much freedom can we yield to maximize the presence of users? From a technical standpoint, I was glad to learn that even the slightest breaks in the illusion (such as the projection abruptly changing from one episode to another) can ruin the fluidity of the experience. It makes me think that I have to test out my interactions ahead of time for the project, rather than solidify them last minute, to avoid unnecessary clunks in the interactions…

Project 2

3/29/21

Reflection on “AI Impacts Design” by Molly Wright Steenson

Molly’s talk illustrates how architects throughout history have influenced the design of artificial intelligence as we know it today. The talk revealed to me that the core of AI is not novel at all – what makes it novel is its sophisticated execution. The ideas of patterns, cybernetics, and even augmented reality were established through works that long preceded AI, leading to the same question that still lingers in researchers’ heads today: “Technology is the answer, but what was the question?” It seems that AI has evolved so fast to tackle so many problems, yet ethical regulation has yet to catch up. Nicholas Negroponte noted in the late 1980s that AI is “between oppressive and exciting,” yet we still struggle to prevent AI usage from being oppressive against the human condition. I wonder how the work of the humanities regarding AI ethics and the technical work that develops AI can be woven together more effectively to make AI a force for good.

If the system that builds an AI is so complex, hidden, and exploitative… what will motivate consumers to counter-police the system? Materializing the hidden labor? Demonstrating how the system uses your speech as training data? Where do we draw the line between helpful technology and technology that infringes on our ethics?

It’s interesting that the early development of AI was driven by the military, government, and geopolitics… rather than by a commercial boom as we see today. I wonder if the evolution of AI will be met with just as many disappointments in Boom 3, although the battle now seems to be about stunting its growth in necessary places to prevent misuse.

I think this framework is helpful in building our Project 2, as the six categories seem to define what an intelligent environment should be capable of. I’m particularly interested in ‘objects with digital shadows’ and ‘objects that subvert,’ as a good design should be able to do both in order to make our lives easier… without being invasive. A design that offers a way in, as well as a clear way out, seems to be the most ethical approach towards intelligence.

Initial Ideas

Research question:

  1. How might we make intelligent systems more transparent? (What does it know about us? What can it do about our lives? How can we hide from its omnipresence? What can we control? Who is contributing the labor? How is the data collected? How is the data interpreted?)

  2. Augmenting our environments with our data?

Ideas

  1. physicalizing/augmenting data collection process

  2. the more you walk, the more you calibrate data about your environment

  3. labor and ai

  4. making labor more transparent

  5. ex) connecting you to mechanical turk workers when you talk with an intelligent agent

  6. representation: how can missing datasets be filled in?

  7. who are you in the overall dataset?

  8. ex) asian american, female, pittsburgh…

  9. how are you being ~oppressed~?

  10. ai shows the decision it makes, based on what it knows about you.

  11. because you don’t fit in with the majority, your decision is being dictated by something else…

  12. intelligence embedded interaction: using our phones as controllers of the world — point to things and get data/information: this is a tree, this is a mountain

  13. role of ai in our social relationships (extension from project 1)…

  14. calibrating the agent based on how you want it to react, with various personalities

Conversnitch

2013–2014, with Brian House. An eavesdropping lamp/lightbulb that livetweets conversations, using a small microphone with a Raspberry Pi that records audio snippets and uploads them to Mechanical Turk for transcription.



WorldGaze


ListenLearner


HYPER-REALITY


3/31/21

Reflection on ‘Designing the Behavior of Interactive Objects’

This reading introduced the framework of using personalities to dictate a design. I think this is pretty future-thinking, as more and more AI-driven services will become anthropomorphized to reflect the human-like actions they can take. I think the “aesthetic” approach, as the author describes it, will eventually be considered a functional one, as thinking through a personality is crucial in many settings, such as healthcare. I wonder what types of attributes can be manipulated when the final product is not robotic. Speech? Animation? Etc.?

Reflection on ‘IoT Data in the Home: Observing Entanglements and Drawing New Encounters’

This was an interesting insight into how people want their IoT data to be experienced in the home. It reminded me of my sophomore year project, where I wanted to delve into visualizing energy use to promote sustainable behaviors… yet it was really hard to make data feel approachable in human terms. Based on the paper, it seems like there was an overwhelming consensus among participants that they don’t trust their IoT devices, yet looking at hard data feels inhumane and unhomely. Even data representations need to be injected wisely into a person’s mental model/day, so that they can be easily digested. The use of tarot cards and epics was particularly interesting, as those strategies aim to provide imaginative narratives through data, rather than an endless list of graphs (which nobody wants). It revealed to me how impactful storytelling is when dealing with dry data…

It also raised the question of: who controls the data? One participant noted how they felt too powerful being able to manipulate their data, as if they could change the course of history… How should designers balance user autonomy, integrity, and privacy? In what ways can a flexible system be prone to misuse?

Case Study: ‘Better Human’ by Yedan Qian



“Better Human is a speculative design imagining a world where native technologies are being widely used to manipulate people’s sensation and mind for wellbeing purpose. It raises the question of whether human should rely on technology to gain self awareness and self control. Should we give up control and accept human-augmentation for good reasons?”

The system works in three parts: reflector, troublemaker, and satisfier. (As it functions as a design fiction, it gives a picture of how the system would work, rather than actually explaining the mechanics behind it.) The user engages with the design through physical objects/representations, which, again, serve more as design metaphors than actual designs.

Overall, the project serves as a provocation of how much control we want our devices to have over us. I’m inspired by its provocative element, use of narrative, and physical prototyping to explain the idea behind it.

Final Research Question

How do humans gain control in a world dominated by AI?

When thinking of AI’s integration into our society, we imagine a world where decisions are dictated by machines, filled with loopholes and inhumane outcomes. Such technology has already seeped into our present-day lives, where facial recognition promotes racial profiling, and machine learning algorithms are biased against people of color. This speculative project explores how humans can gain control in a world where AI dominates decisions in our everyday lives. (Questions of transparency in decision making, filtered understanding of the world, and how people will choose to react.)

1. Choosing your options

2. Choosing your transparency

3. Understanding your data (?)

4/4/21

Reflection on ‘Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design’

This paper articulated the difficulty behind designing with AI. I could fully relate to the challenge of injecting established UX design strategies into an AI product, as it is often unclear what the AI is capable of (capability uncertainty) and what type of results it is actually able to give (output complexity). In light of my first project, even though I have created scenarios in which a conversational AI can mediate two people’s conversations… it is still unclear to me what type of services the AI would be able to offer, as well as how it would respond to the many false negatives/false positives that are inherent in an AI-driven system.

The framework below seems helpful for understanding, at a high level, the types of AI systems designers have to build for. The key seems to be that the designer can narrow their problem space by defining the types of interactions that are “probable”, as well as defining the way the system would evolve/adapt with the user over time.

(Sidenote: the paper points out how rule-based approaches are limited in capturing the essence of AI’s constantly evolving nature, but they are a quick and dirty way to prototype, especially for level 1/2 systems. From personal experience, I think designers suffer from a cognitive dissonance, where they know that the AI should be capable of so many things, but they end up grasping at straws by solidifying the most basic of interactions through a rule-based approach… simply because that is the most accessible way to design in the present day. I wonder how a design tool could respond to the generative nature of AI, so that designers can better expand their imagination of AI systems.)

Applying the findings from the paper to my project… I think I’m interested in exploring the tangible ways that an AI system can present its dynamic, constantly evolving state. Software is quick to adapt to new changes, but can hardware be appropriately designed so that it can also be generative? Can hardware be AI-hardware without needing to be replaced constantly over time, or being so reductionist that it is simply in the shape of a cylinder (aka the Amazon Echo)?

(^That is only one of many goals for the project though… since I’ve had a lot of stray thoughts of what a generative, kinetic sculpture could look like recently.)

Sketches, Thoughts, Research

Sketches on intended qualities, conversations with a persuasive AI, user journey, and potential physical prototypes





Note: Pictionary is interesting to me… maybe more viable for another project though

  1. Findings: Implicit/explicit nudges, autonomy over controls, and transparency are all needed…

  2. Trying to focus on customizable controls on an AI so that the creepy, surveillance capitalism-ness of current AI can be removed.

  3. Can AI be under human control? Can you blind the AI? Can you learn more about what it’s thinking?

  4. Note: I am focusing on a more anthropomorphic representation of AI, rather than AI as an embedded technology in designs. As a result, a lot of scenarios I have derived lean more towards the ~assistive AI~ that we have in the present day…

**Priority 1 is transparency and control. You choose whether your AI learns more or less about you.

**Priority 2 is fluidity of conversation. (Nod to cybernetics.) There are many ~human~ ways that an AI can persuade you, or show its transparency. Emotions, gestures, and multimodal cues all play into the way it shares how much it knows about you. (Implicit nudges) (ex: “Ah man… fine. I’ll spit it out. I know that you’ve been looking at Emma’s profile picture for 45 minutes today! Want me to stop snooping?”)

User Journey Currently:

(Level of engagement)

  1. Look at AI –> (level 1 engagement)

  2. Approach AI –> (level 2 engagement)

  3. Touch AI –> (level 3 engagement)

  4. ^Transparency increases throughout. Nod to Google Soli sensors. (A rough sensing sketch follows below.)
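To make this mapping concrete for prototyping, here is a minimal Arduino sketch of the distance-to-engagement levels. (The HC-SR04 ultrasonic sensor is just a cheap stand-in for something like Soli, and the pin numbers and distance thresholds are placeholders I made up.)

// Rough sketch of the distance -> engagement mapping. The HC-SR04
// ultrasonic sensor stands in for a radar sensor like Soli; pins and
// thresholds are placeholders to tune against the real installation.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(9600);
}

long readDistanceCm() {
  // Send a 10 us trigger pulse, then time the echo.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000); // 30 ms timeout
  return duration / 58; // ~58 us of round trip per cm
}

void loop() {
  long d = readDistanceCm();
  int level;
  if (d == 0 || d > 200) level = 0; // nobody around: stay concealed
  else if (d > 100)      level = 1; // looking from afar
  else if (d > 30)       level = 2; // approaching
  else                   level = 3; // touching distance: most transparent
  Serial.println(level); // downstream, petal openness would follow this
  delay(100);
}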

Readings

  1. ml models cannot be understood… decisions are not clear (if made on neural net)

  2. training data bias, method of data selection, and bias in the algorithm itself are not made clear

  3. no “bugs” from incorrect classification/regression – just bad data, or algorithm issues

  4. path to transparent ai, one solution: google model cards – https://modelcards.withgoogle.com/about

Counter: how ai transparency can be counterproductive in practice…

  1. https://arxiv.org/abs/1802.07810 – when ai decisions are made transparent, people were more likely to miss the errors due to information overload.

  2. automation bias – people trust the ai’s decision, because it isn’t explained.

  3. one way to resolve this conflict between transparency and usability is to put explanations in human language, rather than complicated machine language (a.k.a. design for the explanation behind a machine decision, rather than just the decision alone) (note: this could even be imagined where the signals are made through implicit nudges, like your bedframe flashing lights to encourage you to get some sleep)

  4. in conclusion: ai transparency needs to be better designed to balance ethics and usability altogether

the transparency paradox: current efforts toward ai transparency

  1. transparency paradox – algorithms become more vulnerable to malicious attacks when ai becomes more transparent…

  2. LIME – explaining the predictions of any classifier

Outlining/defining ‘transparency’

  1. helpful guidelines… transparency not only to the user, but also to society at large (ex: transparency in employment changes due to ai)

Overall question then becomes: When is AI transparency necessary, and how can it be humanly explained?

When AI transparency is needed: Storyboards

Case Study: Surface Matters



This project is interesting for its unique use of materials to evoke a new experience. I want to delve into a novel form of tangible interaction in a similar way, where the interaction feels more human and welcoming.

methods of prototyping

  1. wizard of oz- how does the ai feel, tangible interactions

  2. prototyping with ai classifiers – lobe.ai, teachable machine

  3. DtR – different behaviors/physical forms. which ai feels most trustworthy? when do we cross the threshold of creepy vs. helpful?

  4. what is the best way to test the potential of an ai service…?

4/5: Physical Prototype 1

I made paper prototypes to illustrate the idea of transparency.




Initial iterations

State 0: Concealed

State 1: Present/Alive/Nudge to ‘openness’ and ‘transparency’???

Some version of a linear actuator would sit inside to push the scales outwards.

I like how the scales make the ‘device’ feel more alive. It’s almost like a living creature, expanding its ‘fur’ to speak with the user. The prototype inspires more use cases to consider. What does this novel interface allow?

State 2: Most Transparent?

State 3: “Looking around”

Inspirations for form


UK pavilion – Heatherwick


Julie and Julia

inFORM – Tangible Media Group


pinecone! (bio)

Printed paper actuator – Morphing Matters Lab

Potential inner workings

Feedback from Dina:

  1. Try string mechanism, rather than linear servo

  2. Find new case study that is more similar to my project (using kinetic movement?)

  3. Reasoning for current form? Needs justification

  4. The change in form can be layered — transparency of material, changing according to sunlight, etc.

4/7/21

Case Study: Water-reacting architectural surface

The project below is more similar to my project in its execution. Mine is more based on kinetic movement, rather than using a novel material, but I am inspired by their use of geometric patterns to further emphasize the notion of a “reveal.”



Other notable projects for servo mechanism (thanks to Dina):

I wasn’t able to find too much time to work on the prototype between Tuesday and Thursday, but I was able to implement this small change below.

The leaves can now open up when the string is pulled down. Time to try this out with a servo.
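A minimal test sketch for the servo version, assuming a standard hobby servo on pin 9 with the string tied to the horn (the open/closed angles are guesses that would need tuning against the real petals):

// Minimal servo test for the string-pull mechanism. Assumes a hobby
// servo on pin 9 with the string tied to the horn; angles are guesses.
#include <Servo.h>

Servo pullServo;
const int CLOSED_ANGLE = 0;   // string slack, leaves closed
const int OPEN_ANGLE = 120;   // string pulled down, leaves open

void setup() {
  pullServo.attach(9);
}

void loop() {
  pullServo.write(OPEN_ANGLE);   // pull the string to open the leaves
  delay(2000);
  pullServo.write(CLOSED_ANGLE); // release so the leaves fall closed
  delay(2000);
}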

thought: maybe something appears underneath the petal when opened up? haha

4/8/21

Feedback from Dina:

  1. What is the interaction? (ex: ask to play music from spotify, ai shows that spotify already retrieved your personal data, when i’m having a conversation with someone else, ai reveals that it’s recording) — speculative appliance

  2. what triggers the device to be transparent?

  3. String pulling on paper has issues of getting stuck –> dina: previous project used straws/3d model to smooth and create a channel for the moving string

  4. portions of the ai opening is cool — two ways of execution: 1. multiple servos. 2. paper actuators

potential materials for paper actuator

example projects:


Excerpt from Active Matter by Skylar Tibbits



4/13 Reflection on Midpoint Presentation Feedback (Daragh and Brett)

Notes

Brett:

  1. coldplay example – i wish it wasn’t so obvious

  2. transparency use case – i wish it was deeper or darker (needs to be more interesting)

  3. i love the interaction of breathing and opening up

  4. how much it breathes and opens up correlates to what the content is going to be

  5. correlate to different scenarios

Daragh:

  1. like it, plumage,

  2. https://floower.io

  3. needs more richness on the information that’s being revealed

  4. the object itself – what do you see underneath, what’s being invited in the view of what’s under the hood?

  5. revealing the electronics?

  6. makes me think, interaction that you can capitalize on

  7. old work in public displays – interactive tvs – looks at different states of activation. far away: ambient. as you move through, there are four stages of engagement

  8. opening thing up, and inviting someone to attend to it, engage with it.

  9. move closer and invite to

  10. progressive disclosure – glance, move in more and more

  11. nice interactions around proximity, engagement, evolving the scenarios

Brett:

  1. the paper petals – you could talk to mark to make it fit in a better way

  2. inner skin that’s light or sound

  3. as you got closer to it, maybe it tries not to share as much

Daragh:

  1. electrochromic glass?

  2. electric glass could be interesting aesthetic

  3. if individual petal can’t be actuated

  4. https://www.ecloudproject.com/

  5. http://jakobmarsico.com/view-abstraction

Reflection

Based on the midpoint presentation feedback, I want to focus on the following things:

  1. Prototype layers of interaction (distance correlates to openness, light emitting from within)

  2. Develop better scenarios related to AI transparency (“deep and dark,” rich information)

  3. Consider what will be shown underneath –> research different mechanisms

  4. Finalize prototype on 3d modeling software

4/16 Feedback from Shannon (Products POV)

I reached out to my friend Shannon in the P track to get some feedback on the form, as well as ask for advice for fabrication. Here’s what she said below:

  1. petals are not subtle, personally feels a bit creepy

  2. solid casing that encases the whole thing…

  3. smaller flaps that attach to the solid casing…

  4. so that flaps aren’t overlapping each other…

  5. reminds me a bit of an armadillo

  6. there’s no prescribed shape to it… looks like an egg

  7. you can reference alexa

  8. petals are too big, takes away from the form

  9. revealing more of the form and minimizing the petals would be better.

  10. the movements would also be less obvious

  11. if it’s stationary, any movement will be obvious

  12. petals should be minimized, visually less in your face

  13. Suggest: model various shapes of forms, through foam modeling

  14. model different sizes/blocks of shapes…

  15. ask for people’s opinions on what feels natural to them…

  16. more sketching of form…

  17. in consideration of the form, you have to consider how it will be made…

  18. wooden frame…

  19. https://www.mcmaster.com/

Nitinol/Flexinol Wire Research

Tackling goal 3:

The current mechanism with the servo is a bit tacky, especially if I want to show what’s underneath the petals. I went back to looking at muscle wire (Nitinol/Flexinol wire) to see if it would be implementable with paper. It turned out that there were a lot of projects to learn from and gain inspiration from.





I hesitated for a while about whether to buy the wire or not… I didn’t want to completely change the course of my project at this point in the process. Ultimately I ordered it off Amazon, so that I could at least try out the mechanism and see if it would fit.

Detailed next steps

STRUCTURE MATERIAL

  1. makerspace (MAKER NEXUS)- $150 for a month

  2. currently paying $220 for technology…

  3. if not, i’ll just cut out the parts myself on the thick craft paper i currently have

TO FABRICATE

  1. laser cut structure

  2. 3d print servo holder?

  3. cut petals

  4. circuit for controlling wires (how?)

  5. mapping 360 sensor to petals?

PETALS MATERIAL

  1. settled on using paper — i want it to be somewhat malleable

  2. sound of paper brushing up against each other is more pleasing

LIGHT

  1. light inside (smart lamp + speaker?)

  2. revolves depending on where you are

WHAT’S INSIDE?

  1. soft light (speaker/fabric material)

ADDED LAYER OF INTERACTION

  1. swipe to clean

  2. swipe to hear more options (multiple options —> embedded in the interaction)

4/20

Dina Feedback

  1. Test out one petal for nitinol

  2. 3d print base, edge of petal

  3. scale – smaller scale? what’s the interaction? what’s underneath?

  4. what’s inside? (ex: open up when having convo with boyfriend –> shows that ai is listening. option to closer for important convo)

  5. define transparency, type of data

  6. doesn’t have to fix everything, just one specific problem

  7. saving convo and sending to cloud is… not a conspiracy theory

  8. Look to this article: https://www.wired.com/insights/2015/03/internet-things-data-go/

Nitinol Experiments

This was the most successful attempt. Adjustments needed to be made in the way the nitinol wire is heated (a torch is most effective, with the wire coiled on a steel rod), as well as the amount of voltage used to actuate it (4.5v was most effective for the ~5 inch wire I cut out; 3v was too slow, and 12v seemed like too much).

There is a way to calculate how much current should go through the wire to actuate it, but I haven’t used it yet. It’s available on the manufacturer’s website: https://www.kelloggsresearchlabs.com/nitinol-faq/ (question 1.6)
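As a rough sanity check in the meantime, assuming this is 0.006" Flexinol (Dynalloy’s chart lists roughly 1.3 Ω per inch and a recommended current of about 400 mA for that diameter; the actual wire’s spec may differ): a ~5 inch length is around 6.5 Ω, so Ohm’s law gives 4.5 V / 6.5 Ω ≈ 0.7 A. That is above the recommended steady-state current, which would explain why 4.5v actuates quickly but risks overheating the wire, while 3v (≈ 0.45 A) sits near the rated value and heats it slowly.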

Actuation sped up two times

The main problem with the above experiment was that the wire failed to bounce back once the current stopped flowing. Watching the following video made me realize that an opposing force needs to be applied so that the wire can straighten itself out more quickly:


Another issue was the weight of the alligator clips, which seemed to hinder how much the wire could bend. As such, I decided to use aluminum tape I (fortunately) had around at home. (No copper tape was available, but it seemed to do the job.)

Since multiple petals need to be actuated, I looked into how I can control each output effectively on an Arduino. A shift register like the one below seems to be a good solution for acquiring more digital outputs.


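A minimal sketch of how the petals could be addressed through a 74HC595, the kind of shift register shown above (pin choices are placeholders). One caveat: the 595’s outputs can only source a few tens of milliamps, so each petal’s wire would still need its own transistor stage between the register and the nitinol.

// Minimal 74HC595 sketch: one byte = eight petal outputs.
// Pin choices are placeholders; each output still needs a transistor
// stage, since the register cannot source nitinol-level current.
const int DATA_PIN = 2;   // 74HC595 SER
const int CLOCK_PIN = 3;  // 74HC595 SRCLK
const int LATCH_PIN = 4;  // 74HC595 RCLK

void setup() {
  pinMode(DATA_PIN, OUTPUT);
  pinMode(CLOCK_PIN, OUTPUT);
  pinMode(LATCH_PIN, OUTPUT);
}

void writePetals(byte states) {
  // Shift out one bit per petal, then latch to update all outputs at once.
  digitalWrite(LATCH_PIN, LOW);
  shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, states);
  digitalWrite(LATCH_PIN, HIGH);
}

void loop() {
  writePetals(B00001111); // heat wires 0-3: those petals open
  delay(3000);
  writePetals(B00000000); // cool down: everything relaxes closed
  delay(3000);
}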

Shaping the wire. The shape doesn’t quite stay in place, even if secured with strong tape

To execute the “opening up petals” movement, it took many tries to get it right… I realized through the process that the aluminum tape failed to conduct any electricity (likely because its adhesive layer isn’t conductive).

Successful attempts:



Note: Apple announced an update at their 4/20 event about protecting your privacy when interacting with apps… I wonder how they landed on the design decision to show the different types of data being tracked. How do people decide what is too much intrusion and what is not?

Makerspace visit – got lasercut training for 2 hours

4/22 – 4/26: Snapshots of progress

Final petal opening test on old model. Stabilized by holding down wire with tape on structure

better movement.

Lasercutting diagrams



Head-wrecking process… in Illustrator

Lasercutting on the 45w Helix. Used a pulp board I had at home

First draft. Weak joints, unstable petal holder. Needs refinement.

4.5w burnt the wire.

Soldering

Petal open

Nice open. Would be better if it opened more.

Counter-force to pull petal back down???

Magnet to snap the petal back in place?

Thought of slanting the joints, but abandoned

Tried spring wire, based on inchworm video. Did not work

Hair tie (rubber band) attempt. Also did not work

What to do, what to do…

4/27 Feedback from Daniel + Dina

Daniel:

  1. types of data could correlate to certain colors

  2. the object could be beautiful as a tangible object

  3. petals could be made of fabric?

  4. mylar paper…

  5. fabric: below,

  6. leds?

  7. pet – opens, and below are different colors with fabric, a whole map of different data types…

  8. when skin opens, reveals the colors

  9. combination of different data types

  10. not a piece of technology

  11. a plushy toy…

  12. wire – need to use a transistor…

  13. arduino

  14. voltage supplier is providing a lot of current

  15. fabric: if painted with chromatic paint

  16. heat could change the color…

  17. arduino can only provide 130 milliamps

  18. transistor will work as a current switch

  19. current will go to the power supply rather than to the arduino

  20. one or two petals prototyped is good, sketches are beautiful

  21. 3d model the petals

  22. i wonder how you are thinking about tapping into the data?

  23. may need to do more research, show a diagram… how will the whole system work?

  24. facebook: ads, yesterday you were looking at this

  25. explore what will be the way it integrates with social media

  26. data categories – make them up by yourself, according to your experience

  27. white or black skin,

  28. below the skin, all the colors pop up inside

  29. the color = data type

Dina:

  1. one part could be moveable, but attach the petals for everything…

  2. show human figures in systems diagram

  3. fabric or mylar

  4. thermochromic pigment

  5. check felt project from before: http://www.feleciadavistudio.com/

Reflection on feedback

  1. Will try to categorize personal data on each leaf node… Need to decide how broad each category is, as well as how to show/map it. (Maybe an accompanying screen to detail the findings? Like a 3d rendering of a human body with explanations attached?)

  2. Want to try the thermochromic pigment, but rather than just showing differentiation through color, I might try creating different “states” through differing movements of the petals

  3. Transistor suggestion is helpful – will try out with arduino

  4. Tapping into the data on the backend? API? Might check it out. Seems like a detailed design step…

  5. Prototyping just some petal movement is good, but I need to fabricate everything, as suggested by Dina

4/28 Better inner structure

  1. Deeper joints for stable structure

  2. Calculated the width and depth of the petal holders – they needed to be “sliced” depending on the diameter of the sphere. Manual work in Illustrator… but it ended up working out pretty great. (Quick geometry note below.)

  3. Ended up not needing the inner vertical poles to secure the petal holders. Interesting that the new, deeper joints were able to secure the structure so well.
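(Quick geometry note, with made-up numbers since I didn’t record the real dimensions: the slice widths come straight from the sphere equation. A petal holder sitting at height h above the equator of a sphere with radius R is a circle of radius r = √(R² − h²). For example, with R = 6 cm, a holder at h = 3 cm has r = √(36 − 9) ≈ 5.2 cm. That is essentially what the manual Illustrator work was approximating.)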

4/29 Next steps

Fabrication next steps:

Old notes on transistors from physcomp..
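A minimal sketch of the transistor setup Daniel described, where the Arduino pin only switches the transistor and the actuation current is drawn from the external power supply rather than the board. (The TIP120 and the 1k base resistor are my assumptions, not from the notes; timings are guesses.)

// One-petal nitinol driver through a transistor. Assumed wiring:
//   supply + -> nitinol wire -> TIP120 collector
//   TIP120 emitter -> supply ground (shared with Arduino GND)
//   Arduino pin 5 -> ~1k resistor -> TIP120 base
const int PETAL_PIN = 5;

void setup() {
  pinMode(PETAL_PIN, OUTPUT);
}

void loop() {
  digitalWrite(PETAL_PIN, HIGH); // heat the wire: petal opens
  delay(1500);                   // keep the pulse short to avoid burning it
  digitalWrite(PETAL_PIN, LOW);  // cool down: petal relaxes closed
  delay(5000);
}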

Interaction next steps:

  1. types of data to show through petals (color + movement)

  2. types of inputs (gestures, speech)

  3. storyboard

Found a super helpful page: https://www.kobakant.at/DIY/?p=7981.

Nitinol wire use cases below:




