
I h8 u Skynet: Designing Intelligence in Environments

Design Studio – Spring Semester – Project 02

03/26 AI Impacts Design & Architecture – Molly Wright Steenson

The quote that Molly used in her presentation – “Technology is the answer. But what was the question?” – is particularly interesting to me. I think I might be a bit too dense to really dig into it, but to me it really speaks to finding purpose in technology, rather than creating/using technology for the sake of the betterment of…technology. A lot of the time when I’m shown ‘technology-heavy’ projects, meaning projects that are less about designing for a specific purpose and more about pushing the boundaries of ‘what’s possible’ within a specific technological field, I’m left pretty unimpressed. A good example would be demonstrations of VR capabilities with different headsets (Hololens, Oculus, etc.), where people who are engaged and invested in that particular ‘brand’ of technology become very excited by the ‘possibilities’ of a new feature or ability of VR. When I ask what those possibilities are, I’m honestly usually given a very vague, half-mumbled answer, or I’m given clear examples of contexts in which it could be useful but nothing more than that. There is a bit of a paradox here to me – we’re constantly pushing our technological boundaries and capabilities forward without much notion of what we will use them for, but it’s difficult to imagine what their purposes could be without having created them yet.

Readings 01

Anatomy of AI – Well, that was quite a lot. I thought the idea of the Echo’s sheer ease of usability, almost to the point of laziness, being in such sharp contrast to the amount of resources and human spirit (kinda dramatic, but I couldn’t think of anything else that encompassed the history and scale behind Alexa’s production) was really powerful. We put this colossal exertion of our civilization towards creating as easy and lackadaisical an experience as possible. At such a high, monstrous cost, it’s kinda…disappointing? That the best we could give ourselves in return for that price is simple tasks like turning off the light or ordering food being made even simpler. Not even necessarily better, just simpler. Man, I sound like Jonathan, it’s a bit cynical innit? Of course, the really dark part of it all is that “we” are not paying the price. The overwhelming majority of the people who get these Even Easier Tasks are not the ones paying for them. We’re not doing the oppressive labor or watching our land get ripped apart for resources, but we don’t even have to flick our light switches up or down anymore.

It kinda feels like we gave up our morality and humanity as consumers for the payoff of this:

Again, I know, cynical and a bit dramatic right? But it makes me not really want to make any of the fun, super experiential things I was gonna make for an AI product. Hypothetically, nothing I could make could outweigh the cost of its production and continuous operation. I guess I feel like I should make something that begins to equalize, address, or at the depressingly bare minimum, just acknowledge the cost.

So that’s how I feel after this one.

Broken Promises & Empty Threats: The Evolution of AI in the USA, 1956-1996

I thought it was interesting how much baggage AI comes with, just from a historical standpoint. Even the scientific community has had years of divide over its potential, both positive and negative. Almost all of my emotional understanding of and attitude towards AI comes from Science Fiction material, where it feels like there is an excitement-fear relationship with the concept of it. We’re obsessed with it, making promises and threats about how soon it could come and how much it’ll change the world when it finally does. Plenty of Sci-Fi material has been made from that feeling of dread associated with AI, all the horrible Terminator-esque scenarios that could come to pass as a result of Artificial Intelligence surpassing humanity. So it’s ironic and hilarious to me that all these fears have not come to pass as a direct result of humanity’s inability to make something capable of surpassing ourselves…yet.

Enchanted Objects: Design, Human Desire, and the Internet of Things

“The fantasy is common in fairy tales: the woods come alive—every plant and animal and rock and river—to attend to your needs.”

Painting it like this is really exciting; I imagine each of these smart objects with their own animated character, Mark Hamill voice actor, and backstory, all within the context of my own life and space. That’s my first reaction to it. My next reaction is a little less enthusiastic, especially about that ending – “attend to your needs.” I feel like I’ve been trained at this point to be hesitant around design philosophy that preaches ease of living and endless magical opportunities, as it often boils down to a more ‘magical’, tech-aided version of everyday tasks. Is being able to make sure your house isn’t flooding without lifting a finger (as expressed for the Twine design) really magic? Technologically speaking, for sure. But as a personal experience, it’s pretty underwhelming for me, especially when compared to the vintage Disney world described here, one that’s brimming with character and story rather than just ease or luxury. A lot of these sorts of characters also have a degree of autonomy, but within their own narrative/character attributes; this is what makes them interesting. I think placing all of that independence and learning solely in the context and service of a person removes a lot of the excitement and novelty associated with characters like that.

“This points to the possibility of our realizing another persistent fantasy: to make our own magic.”

^This is a much more magical interpretation to me. Being able to modify your space, senses, and objects as a way of furthering your ability to create identity for yourself and connect with others feels much more limitless. While character and story are not necessarily presented by an experience here either, the opportunity to create your own is just as magical.

“Hacking the physical world will be as common as creating our own snap-together LEGO constructions. Not only geeks will combine and recombine data from various apps at their whim.”

^The idea of my conservative grandparents in rural PA constructing their own spaces or experiences is really funny to me, and feels a lot more approachable and accessible than much of the other technology they have attempted to adopt or use. Using something as familiar and intuitive as building blocks, with tons of enjoyable physical cues as to how to make something ‘work’, makes it easier for me to imagine any user in any context creating for themselves.

“The watch, like the smartphone, offers a blank slate for inventive makers. Like chefs with a cabinet of spices, we will all cook up enchanting works to suit our desires. We will become the magicians—because everything from shapes to behaviors will become hackable.”

“Once again, we are seeking to enrich life by subtracting elements from it.”

03/26 Possible Research Questions

  1. How might we reveal to the user the complex resource and labor system within their simple interactions with intelligent home devices?

  2. Intense contrast between AI product’s ‘limitless’ digital cloud and lifespan of the physical designed body

  3. Subvert the AI role of the ‘Assistant’

  4. How might we create the presence of intelligent characters/narratives in the home?

  5. Create more exciting, mysterious, and impactful experiences to interact with in an intelligent space

  6. Explore AI roles outside of ‘assistant’

  7. Explore the experiences of user-creation of intelligent space/characters versus autonomous intelligent space/characters

  8. How might we make customization/creation of IoT/smart homes more accessible and understandable for non tech-savvy people? In other words, how might we make an intelligent space more ‘seamful’ for people unfamiliar/uncomfortable with IoT?

  9. “Mark Weiser proposed decades ago that designers should not focus on designing systems that are seamless, but rather systems with beautiful seams”

03/28 Final Research Question

How might we reveal to the user the complex resource and labor systems within their simple interactions with intelligent home devices?

  1. Emphasize disparity between an intelligent product’s ‘limitless’ digital cloud and the lifespan of the materials in the physical designed body

  2. Convey complexity and weight of systems needed for a single interaction in contrast with the simplicity and ease of the interaction

  3. Subvert or transform the AI role of the ‘Assistant’

03/28 Project Goal

Design an intelligent home experience that is more complete, meaningful, and enjoyable for the user while being more ethical and sustainable for the system behind it.

  1. Ideally, the design would have a preferred experience for the consumer, be a more profitable product for the brand, and require ethically improved or more sustainable production.

  2. These sub-goals can run parallel to each other but ideally would be cause-and-effect, such as leveraging being able to see or impact the system (Actionable! Not just awareness.) to create a more meaningful, complete, and profitable product.

  3. User Goal: “I want to purchase and use this product because it’s such a wonderful experience”, not “I want to purchase and use this product because it is more ethical and sustainable”

03/28,29,31 Project Goal + Analyzing Case Study: ‘Project Alias’

How might we leverage the ‘reveal’ of the current complex system behind intelligent home devices to design a more meaningful and preferred experience that in turn creates a more ethical and sustainable system?

For example, Project Alias could be seen as a more experientially rich product for the user than the base Amazon Echo or Google Home as a result of directly addressing one of the aspects of the larger system around the device (the surveillance) within the designed experience.

By having a product that comments on and impacts the system around the device, both in its aesthetics and interactions (both very direct, noticeable for the user), Alias creates a richer, potentially preferred experience for the user. This in turn further affects the original system it was commenting on, by providing a new, alternative experience to the user that is both richer and more ethical than the original. Amazon/Google can be pushed to sell a more ethical product because it is richer and preferred for the user, and therefore potentially more profitable.

In other words, it creates experience from commentary, which then creates impact from the introduction of the new experience into the market.

It’s also very worth noting that Alias’s commentary and impact on Google/Amazon’s surveillance system is made partially through user actions – the user first places the Alias over the smart device (effectively “muffling” it through a physical act) and then changes the ‘name’ of the device from the default ‘Google’ or ‘Alexa’ (taking control/ownership over the device + its identity, and the surveillance of their own home). They then reinforce this over time, as every time they interact with the Alias they call it by their own chosen name, until it becomes theirs rather than Amazon’s or Google’s. This makes for a way richer, user-driven experience.

Also worth thinking about: once the owner has installed the Alias, they are then making a continuous choice every day that they keep the Alias on to keep their interactions private. This is good if you want to keep this a one-time experience at the start, and then an extremely passive, almost invisible/unconscious experience for the rest of the lifecycle (with the notable exception of the Alias’s very direct visual form, which serves as a constant reminder of the continuous choice). A different approach that could be more experientially (or directly) rich would be allowing the user to use Alias to choose what is being surveilled –> certain behaviors, interactions/voice prompts, certain areas of the home, specific people in the home, specific times of day, certain parts of their profile like shopping but not others –> and make this into a tangible interaction with Alias.
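
Just to think through what ‘choosing what is being surveilled’ could mean concretely, here’s a quick sketch of a scope object a user could configure. Every name, field, and value in it is my own assumption for illustration, nothing from the actual Alias project:

```python
# Hypothetical sketch of a user-defined "surveillance scope" for an Alias-like
# add-on. Names, fields, and values are assumptions, not part of the real project.
from dataclasses import dataclass, field
from datetime import time

@dataclass
class SurveillanceScope:
    allowed_rooms: set = field(default_factory=lambda: {"kitchen"})
    allowed_people: set = field(default_factory=lambda: {"me"})
    allowed_hours: tuple = (time(8, 0), time(22, 0))   # only pass audio in this window
    allowed_topics: set = field(default_factory=lambda: {"shopping"})  # e.g. shopping but not health

    def permits(self, room: str, person: str, now: time, topic: str) -> bool:
        """Return True only if every dimension of the request is inside the scope."""
        start, end = self.allowed_hours
        return (
            room in self.allowed_rooms
            and person in self.allowed_people
            and start <= now <= end
            and topic in self.allowed_topics
        )

# Example: the add-on only stops muffling when the scope permits it.
scope = SurveillanceScope()
print(scope.permits("kitchen", "me", time(9, 30), "shopping"))  # True
print(scope.permits("bedroom", "me", time(9, 30), "shopping"))  # False
```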

How does Project Alias work?

‘Alias’ is described as a ‘middle-man’ between the smart speaker and the user. It does this by being connected to the speaker’s microphone and emitting a constant low ‘muffling’ noise that renders any conversations that the speaker might be listening to inaudible. When the user says their Alias wake-word, it stops the ‘muffling’ sound and quietly repeats a recording of the original wake-word (such as ‘Alexa’) to activate the smart speaker.
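
Trying to restate that loop for myself, it would probably look something like the sketch below. This is pseudocode-level only: the mic array, the two small speakers, and the wake-word detector are hypothetical stand-ins, and the real Alias code is surely different.

```python
# Rough sketch of the behavior described above, not the actual Alias code.
# mic, noise_speaker, playback_speaker, and detect_custom_wake_word are
# hypothetical stand-ins for the Raspberry Pi's mic array, the two small
# speakers, and the locally trained wake-word model.
import time

def run_alias(mic, noise_speaker, playback_speaker, detect_custom_wake_word):
    noise_speaker.play_loop("white_noise.wav")          # constantly muffle the smart speaker's mics
    while True:
        audio_frame = mic.read()                        # listen for the user's chosen wake word
        if detect_custom_wake_word(audio_frame):
            noise_speaker.stop()                        # briefly stop muffling
            playback_speaker.play("alexa_wake.wav")     # quietly replay the original wake word
            time.sleep(8)                               # let the user's command reach the speaker
            noise_speaker.play_loop("white_noise.wav")  # resume muffling
```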

Note: Alias utilizes the existing interactions of the original smart speaker and then builds off of and subverts those to create its own new set of interactions.

The Alias app is a simple controller that allows the user to reset, turn on/off, and train the Alias. This opens new custom functionalities that the original smart speaker did not allow, as the user can program/train the Alias to send any chosen speech commands to the speaker.

There is a small neural network that runs locally on Alias (not sure how this works), which can be trained to accept these new speech commands.
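
I don’t know what architecture that local network actually uses, but a bare-minimum version of ‘train a tiny model on a few recordings of a new name’ could look roughly like this. The MFCC features, the scikit-learn classifier, and the file names are all my assumptions, not how Alias really does it:

```python
# Minimal sketch of local wake-word training, assuming MFCC features and a tiny
# MLP classifier. The real Alias network is likely different; this just shows
# the shape of the idea: a few positive recordings vs. background noise.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)          # one small fixed-size vector per clip

# A handful of user recordings of the new name, plus some "not the name" clips.
wake_clips = ["my_name_01.wav", "my_name_02.wav", "my_name_03.wav"]
noise_clips = ["kitchen_noise.wav", "tv_chatter.wav", "silence.wav"]

X = np.array([features(p) for p in wake_clips + noise_clips])
y = np.array([1] * len(wake_clips) + [0] * len(noise_clips))

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

def is_wake_word(clip_path: str) -> bool:
    return bool(model.predict([features(clip_path)])[0])
```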

^ The Alias consists of the 3D printed shell, which holds a Mic Array, a Raspberry Pi, and 2 speakers (one for emitting the low muffling noise, the other for the sound recording of the original wake-word). The shell then envelops the top of the smart speaker (visually muffling the speaker) and connects to the mic of the speaker (technically muffling it).

Questions: How does the neural network within the Alias work? How does it allow for training of new wake-words? Could the Alias be trained to do new functionalities through new speech commands outside of wake-words? (It suggests that it could, but I’m not sure how this would work exactly…)

03/28 Case Studies & References

  1. “Bots – Collaborative AI for the Smart Home” by Kevin Gaunt

  2. “Project Alias” by Bjorn Karmann and Tore Knudsen

  3. “Objectifier” by Bjorn Karmann

  4. “Privacy Lamp” by Tore Knudsen

1. “Anatomy of an AI System” by Kate Crawford and Vladan Joler

2. “Emotionally Durable Design” by Jonathan Chapman

3. “IoT Data in the Home: Observing Entanglements and Drawing New Encounters” by Audrey Desjardins

4. “Things of the Internet (ToI): Physicalization of Notification” by Eiman Kanjo and Kieran Woodward

5. “Systems to Design a Smart and Contactless Home” by Lilly Can

03/29 Intelligence in Environments Sketch

03/30 Reading 02

“Designing the Behavior of Interactive Objects”

‘Personality’ Design Method:

  1. Initial interaction with the base object to understand current assumed interactions with it

  2. ??? Brainstorm possible behaviors and metaphors for the object based on current stereotypes of the object’s personality (what does it mean by creating metaphors and what does it mean by personalities? –> OK, after reading the paper fully, I get the personalities but am still a bit confused by the metaphors)

  3. Iterating over several sessions interacting with the object to develop interaction scenarios and behaviors

  4. Final description of developed behaviors for object

I really like the idea of using interaction with an already created object as a way of conceptualizing new interactions. It’s like real-time testing of the flaws and strengths of the object while also exploring potential new interactions in a really direct way; nowhere in storyboarding, sketching, or early physical prototyping can you fill the role of the user as directly as with the original object. It’s also such a physical way of conceptualizing, which means you can directly experience where the holes in the experience might be, what physical form aspects would need to be added, and the emotions associated with different parts of the interaction.

Making the Personalities a bit more specific, almost characters even, seemed really helpful to me. The ‘Big Boss’ and ‘Loving Parent’ had more of an impact on me in terms of how their interactions/movements with the user related to the stereotype of their character, versus something like the ‘Risk Taker’, which felt too general to associate with a specific character from one’s life. The ‘Loving Parent’, on the other hand, is definitely something I’ve experienced and have very specific memories of, so seeing the couch mimic those movements was much more impactful and meaningful to me.

03/31 Narrowing Concept

How might we reimagine interaction with smart home speakers for ‘Wicked System’ commentary and impact that produces a richer alternative user experience?

Design a product ‘add-on’ to Amazon Echo / Google Home that alters its user interactions to create ‘Wicked System’ commentary and impact that produces a richer alternative user experience.

04/01 Echo Personality

04/01 Narrowing Focus More

‘Wicked System’ Focus: Abandoned/Disposed Devices, Sustainable/Ethical Materials, Digital Waste

Project Concept:

Leverage owning Smart Assistants for long periods of time and across multiple product generations to create a rich user experience that encourages keeping individual devices when moving from product to product. The ‘Character’ of the Assistant evolves over time even as its functionality/performance may decrease. This will be reflected in continuous changes in both the form and interactions/behaviors of the Smart Assistant.

  1. Smart Assistant ‘Aging’ could also be affected by number of interactions, amount of data, digital waste, energy use, etc. not just time

  2. The character/personality of the Assistant would change based on shifts in recognized patterns of user behavior over the long-term. This would allow the Assistant to map its changes in personality to the changes in a person’s life – location, occupation, home life, human character, etc. – growing along with the user as time goes on and new generations of Smart Devices are released.
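
To make the ‘aging’ idea a bit more concrete for myself, here’s a rough sketch of a state object whose ‘age’ comes from time, interactions, data, and environmental exposure rather than calendar time alone, and which maps that score onto a coarse character stage. Every name, weight, and threshold here is a placeholder I made up:

```python
# Rough sketch of how an assistant's "age" could depend on more than calendar
# time: interactions, data handled, and environmental exposure all contribute.
# All names, weights, and stage labels are placeholders for the concept.
from dataclasses import dataclass

@dataclass
class AssistantAge:
    years_owned: float = 0.0
    interactions: int = 0
    megabytes_processed: float = 0.0
    hours_in_sunlight: float = 0.0

    def age_score(self) -> float:
        # Weighted blend; the weights are arbitrary stand-ins.
        return (
            1.0 * self.years_owned
            + 0.0005 * self.interactions
            + 0.0001 * self.megabytes_processed
            + 0.01 * self.hours_in_sunlight
        )

    def character_stage(self) -> str:
        score = self.age_score()
        if score < 2:
            return "Assistant"       # eager, functional, default persona
        elif score < 6:
            return "Companion"       # knows routines, starts referencing shared history
        else:
            return "Storyteller"     # performance fades, memory and character deepen

age = AssistantAge(years_owned=2, interactions=3000, hours_in_sunlight=100)
print(age.character_stage())  # "Companion" with these placeholder numbers
```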

^Product ‘Generations’ can act in the same way as Familial Generations, with shared history and culture across ‘members’ and unique interactions within the family. Keeping multiple Smart Assistants over time can create unique behaviors and interactions between multi-generational products.

^Alfred’s character evolves from that of ‘Butler’ to a ‘Mentor’/’Friend’/’Supporter’ role as Bruce’s character and his interactions with Alfred change.

^With Bruce’s development into Batman, Alfred recognizes the changes in Bruce’s behavior and assumes his needs have changed. As a result Alfred’s character/role in Bruce’s life changes as do his interactions and functions.

^This also works because Alfred’s old age prevents him from performing some ‘Butler’ role actions as well as a younger man, but his understanding of Bruce from before he was Batman to now allows him to take on new functions & roles that another ‘Assistant’ would not be able to.

Design Goal: Create a rich user experience that brings personal value to outdated Smart Assistants.

Secondary Design Goal: Create unique interactions and opportunities for users with multiple Smart Assistant product generations.

04/03 Concept Development (from talk with Jonathan 01)

How to connect form + behaviors of Smart Assistant ‘characters’?

Amazon Echo is an uninteresting object that, even if its form or material is re-designed, barely warrants any physical interaction. So the character could change with the form, or it could change with the behaviors. But a third way could be to create a kind of ‘Intermediate Object’ that represents the changes in Assistant ‘Character’ over time. This could be a secondary physical object (connected to/around the Echo or separate), some kind of digital hologram or poster, an AR object, or some sort of phone app extension.

The other way to do it, which could be either a separate approach or connected with the ‘Intermediate Object’, would be to design a Smart Assistant as an ‘Heirloom’ piece, something that you keep over the years. The advantage with a Smart Assistant is that unlike other static Heirloom pieces, a Smart (Assistant) Heirloom is dynamic, as it can grow and evolve with you, changing in ‘character’ from Assistant to new roles as its user changes as well. One of the main questions to answer is: should this still be an Echo re-design or its own speculative concept?

Pros of Echo Re-Design: Would have a base/foundation to work with in terms of interactions, behaviors, form. Able to play off of existing assumptions about an Echo. Could have more validity in terms of existing within the ‘Wicked System’.

Cons of Echo Re-Design: Would have to stay within existing form which is very basic and lacking in interaction. Even adding the ‘Intermediary Object’ would then have to connect back to Echo. Would have to connect back to / fit within Amazon Brand & Values, established interaction language, without changing these very much.

Pros of Speculative Concept: Would get to design an entirely new form and interactions without constraints. Would not have to exist so concretely as a branded project, could be more speculative and less limited. (Could still fit with multiple product generations and ‘Wicked System’ aspect by talking about speculative future)

Cons of Speculative Concept: Would have to design an entirely new form, whether physical, digital, or hybrid. Would have to create new interaction language, etc. Could still fit with multiple product generations concept.

Speculative Aspect

This project designs in anticipation of a Speculative Future where Smart Assistant Devices have become ubiquitous, with years of short-term product generations continuously topping each other in performance. This could create an unsustainable pattern of abandoned devices and a lack of experiential value in Smart Assistants now that the novelty has worn off and the convenience has become expected.

To design for this, the goal of the project is to design a rich alternative (read: non-performance-related, untraditional/unexpected for current Smart Assistant experiences) user experience that brings long-term personal value to outdated Smart Assistants. This will decrease abandonment and material waste of Assistant devices as they become progressively outdated in terms of performance & functionality, as well as creating a richer personal experience for users that will keep the value/richness in Smart Home Tech past the point where its convenience becomes the norm.

Concept: Design a personal ‘Smart Heirloom’ in the form of a Smart Home Assistant Device. This makes the Heirloom experience dynamic rather than traditionally static, as the Smart Assistant can evolve its behaviors and ‘character’ with the user over a long period of time.

Ways for Interaction/Behavior of Assistant to change over time

  1. Learn/Create hyper-specific routines

  2. Change Responses/Reactions to certain Speech Commands

  3. Sensory add-ons throughout house, change over years

  4. Learn/Create new interactions/functionalities for Space, User, etc.

  5. Ways of interacting could change, such as going from verbal reactions to typed words or vice versa (or go even further, printing out small fortune-cookie-esque text messages towards the end of its lifecycle)

  6. Interaction with other smart devices

  7. Some level of autonomy/independence in terms of ‘Character’? Pieces of its own presence rather than just being (so obviously) morphed to your routines and needs.

  8. What level of user control/creation of the character? Are there any tangible interactions that could happen (physical with form, verbal, etc.) that could be a way of directly/knowingly interacting with the ‘character’ of the device?

  9. Scenarios: changes from being left in the sun all day, changes from living in the city vs the country, changes from helping with homework, changes from a new pet being in the house (could break down changes into sensory aspects, can then tech-wise add speaker, light sensor, etc.), changes from new product generations being bought, changes from noise level (like frequent parties)

  10. These changes are examples of recognizing overall patterns in user behavior that then impact what role/character it develops into over time (roughly sketched in code right after this list), which could be reflected in the content of its verbal responses, what routines it learns, its way of giving responses, interaction with other smart devices/home systems, and maybe creating new interactions/functionalities for the user or home space

  11. Question: Would this be recognizing patterns in user behavior in just interactions with the Assistant, or non-direct interactions like contextual environmental stuff, or stuff that ‘happens’ to it as well as user interactions?

  12. Right now, I think the best would be mostly based on direct user interactions and some stuff that ‘happens to it’, which could also be environmental.

  13. Scenarios: end role as a memory-keeper and storyteller, who can bring back memories or stories of past aspects of the user’s life/routine (impacted by the ‘Character’/personality traits it has developed over time)

  14. Question: Is this trying to do too many things? Or is the existing smart assistant typology enough of a foundation to build on for this to work?

  15. Right now, there are probably two paths:

  16. Focusing on the ‘Memory’ feature, where a smart assistant would over time take on the role of a ‘Memory-Keeper’/’Storyteller’ (the storyteller aspect is really interesting, where it tells your own past ‘memories’ back to you as stories –> “Alfred, tell me one of our stories”). Would have to tie this to physicality of form and/or interaction in order to keep it tied to specific physical devices.

  17. This could also play across larger time scales and user interactions, telling stories across or within generations of family.

  18. Might also be worth asking Jonathan if ‘Character’ can still play a role here? Like making a more static character that shifts from Assistant to Storyteller over time or something more dynamic with more characters that affect how a story is told (changes and evolutions would be more subtle for this one)

  19. Focusing on the ‘Character’ feature, where a smart assistant would change its behaviors over time to take on new roles in response to either patterns in user interaction/behavior or external stimuli ‘happening’ to the assistant device (or both). The choice between these two is really a choice between user control vs. assistant autonomy and what each does for the presence of the assistant in the experience.

  20. The other big thing to figure out for this one would be how these changes in character are reflected to the user. I’ve listed a couple of options earlier but could probably only choose 2 at max: 1 visual or form change and 1 interaction/behavior change. The Visual/Form change is a bit more free but would have to be very flexible, able to reflect multiple characters over the course of the product lifecycle. The Interaction/Behavior change has to be very scalable, in that it can convey different levels of character change (minor scaling to major) across time. It has to be able to intensify in levels.

  21. Secondary question: Would the character changes be exponential? As in, would the 5th character development be more extreme than the earlier iterations? I think so; this feels like it would be more rewarding over a long period of time.
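
For the pattern-recognition idea above (items 10 and 19), here’s a quick sketch of what ‘recognized patterns nudging the role’ could mean in practice. The pattern names, roles, and thresholds are all placeholders I invented:

```python
# Sketch of the pattern-to-role idea: count recognized patterns in user behavior
# and let the dominant pattern nudge which role the assistant grows into.
# Pattern names, roles, and thresholds are placeholders.
from collections import Counter

ROLE_FOR_PATTERN = {
    "homework_help": "Tutor",
    "recipe_requests": "Sous-Chef",
    "reminiscing_prompts": "Storyteller",
    "schedule_management": "Butler",
}

def developed_role(observed_patterns: list, current_role: str = "Assistant") -> str:
    """Only shift roles once a pattern clearly dominates the recent history."""
    counts = Counter(observed_patterns)
    if not counts:
        return current_role
    pattern, count = counts.most_common(1)[0]
    if count >= 20 and count / len(observed_patterns) > 0.5:
        return ROLE_FOR_PATTERN.get(pattern, current_role)
    return current_role

history = ["homework_help"] * 25 + ["recipe_requests"] * 5
print(developed_role(history))  # "Tutor"
```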

Another thing to note: Jonathan mentioned having an ‘assembly’ of smart devices. I assume that would be taking new product generations when they are released and attaching them/pairing them with the existing device to create new interactions / something new (character, form, etc.). I’m not really sure right now what new experience/interaction would be created from this, like what the old device could bring to the new one, in either of the two concepts above. But I think figuring this out for whichever one gets picked and having it as an additional piece of the design would be really good. (Would have to try to have it build on the older device in a way where you can’t just have the old one uploaded to the new one and the device then thrown out.)

04/05 “Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design”

Stand-out Challenges:

  1. ‘Difficult to communicate AI system evolvement over time to users’

  2. ‘Difficult to design shared control between AI and the users’

  3. ‘Difficult to explain AI behaviors to users’

^These are interesting to me because they revolve not around the understanding or appreciation of AI by the designer but by the user. AI has become such a ‘loaded’ (not sure if this is the right word for it) term, complete with existing cultural expectations and fears, many of which are based on ‘threats’ or ‘promises’ (as discussed in the earlier History of AI reading). So it’s difficult to design experiences for users that differ from this set of expectations, a good deal of which are also based on fiction, as well as to deal with the sheer complexity of AI making it a black box.

Even if you created a wonderful, useful AI product, if users have no clue how it works then they can’t interact or connect with it to the fullest extent, and if it is so far off from what they expect from an AI product, it may also fail to connect with them. I think a degree, even a large one, of ‘magic’ (as in not making clear how parts of an experience work) can enrich a user’s experience, but not when it’s completely enveloped in this magical shroud of mystery. Eventually the novelty wears off and the user is just left with confusion, especially if they want to customize or repair it.

One way of doing it would be placing the ‘black box’ of an AI product inside a designed, understandable concept. So for example, if you place an AI product within a robot body that resembles a ghost, there are innate understandings of what the AI could do and what the experience may be like for the user. There’s still plenty of room for surprise and subversion but the base expectations/cultural understanding of what a ‘Little Ghost’ might be like creates an entry point. From there, if the AI has some hyper-complex learning/adaptation component, it can be ‘retold’ in the form of the Ghost character. On the technical side of things, the black box can stay dark while the user’s concept and understanding, and therefore their connection to the product, becomes much more transparent.
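
A toy sketch of that ‘retelling’ layer, just to see the shape of it: the black box emits plain technical events, and a character wrapper translates them into the Little Ghost’s voice. The event names and phrasings here are entirely made up:

```python
# Toy sketch of wrapping an opaque AI "black box" in a character layer: raw
# system events stay technical, and the Little Ghost persona retells them.
# Event names and phrasings are invented for illustration.

GHOST_RETELLINGS = {
    "model_retrained": "I had a strange dream last night... I think I understand you a little better now.",
    "new_routine_learned": "I noticed you always dim the lights after dinner, so I'll start haunting the switch for you.",
    "low_confidence": "I'm not sure I heard that right... could you say it again, a bit slower?",
}

def ghost_says(raw_event: str) -> str:
    """Translate a technical event into the character's voice, with a safe fallback."""
    return GHOST_RETELLINGS.get(raw_event, "Something shifted in the walls, but I can't explain it yet.")

print(ghost_says("model_retrained"))
```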

  1. ‘Difficult to articulate what AI can/cannot do’

  2. ‘Difficult to design interactions that constantly improve AI performance’

  3. ‘Do not know how to purposefully use AI in the design problem at hand’

^These, on the other hand, are interesting because they relate to the Designer’s confusion about how AI can/should work, or what AI even is. I think it’s actually pretty exciting that designing AI’s interactions in relation to its ability to improve itself is a major challenge, because that’s really a design-over-time problem. Leaving the AI in a closet to learn by itself isn’t helpful to anyone, but neither is completely obscuring its learning. If the interactions stay the same through the product lifecycle, it won’t matter how much it learns or subtly improves or changes if the user cannot explicitly see or experience it through their interactions.

So the base, foundational interactions need to be loaded with learning opportunities for the AI while still being dynamic and flexible, to evolve with the AI as it learns.

04/11 Planning out work

What needs to get done:

  1. Storyboard of Interactions

  2. Journey Map of Interactions (based around user attachment to device, device role, and physical change of device)

  3. Form iteration with foam, wood, fabric

  4. Talk with Josiah about making form

04/13 Concept Feedback

  1. Interaction is too simple and doesn’t hold enough meaning because it focuses on the minute. There isn’t much value in it taking in this simple data and then spitting it back out at a later date, without doing very much with it.

  2. Need to find a more meaningful interaction/storytelling device –> what is it taking in vs what is it giving back out? What is it doing with the information, what interaction/story is being created?

04/21 Planning Work 02

  1. Conversational + Physical Interactions Storyboard

  2. Write out each separately

  3. Make basic storyboard for each separately

  4. Combine into one basic storyboard as draft

  5. Make more detailed, fuller storyboard

  6. Iterate on form + tangible interactions

  7. Iterate with 3d Models (both the assistant form and the heirloom form)

  8. Print a few for multiple iterations

  9. Choose one, model keyframes for lifecycle and figure out what speaker/electronics would be needed and where they would go

  10. Finalize that model (diagrams/renderings of where each item would be in the form, make systems map with this?)

  11. Print multiple ‘keyframes’ of model (maybe woodshop the base)

  12. Figure out what would be best for final content – stylized animation, thematic storyboards of diff. scenarios, real-world video with models, etc. –> Talk to Dylan about it

04/23 Items List + Corresponding Interactions

  1. Speaker (Woofer + Tweeter)

  2. Mic

  3. Wifi Module (Motherboard)

  4. Control Board (Volume Controls) with:

  5. National Semiconductor LP55231 Programmable 9-Output LED Driver (x4)

  6. Texas Instruments TLV320ADC3101 92dB SNR Low-Power Stereo ADC (x4)

  7. Texas Instruments SN74LVC74A Dual Positive-Edge-Triggered D-Type Flip-Flops

  8. Motherboard (+ Ports) with:

  9. Texas Instruments DM3725 Digital Media Processor

  10. Micron MT46H64M32LFBQ 256 MB (16 Meg x 32 x 4 Banks) LPDDR SDRAM

  11. Samsung KLM4G1FEPD 4GB High Performance eMMC NAND Flash Memory

  12. Qualcomm Atheros QCA6234 Integrated Dual-Band 2×2 802.11n + Bluetooth 4.0 SiP

  13. Texas Instruments TPS65910A1 Integrated Power Management IC

  14. Texas Instruments TLV320DAC3203 Low Power Stereo Audio Codec w/ Headphone Amplifier

  15. Texas Instruments TPA2025D1 2 W Class D Speaker Amplifier

^^^The above will cover the ‘Listening’, ‘Speaking’, and ‘Wake Words’/’Key Words’ (Commands) in the system map; these parts are mostly already found in the Amazon Echo Dot (4th Gen).

I still need pieces to perform/address the Interactions of: ‘Heirloom ID Code’, ‘Connection btwn. Heirloom and Assistant upon physical touch’, ‘Recording Story’, ‘Storing Story’, ‘Retrieving Story’, ‘Re-telling Story’, and ‘Visual/Physical Change and Indications’.
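
I haven’t picked components for these yet, but on the software side the ‘Recording Story’, ‘Storing Story’, and ‘Retrieving’/’Re-telling Story’ interactions could boil down to something this simple. The heirloom ID, the SQLite schema, and the random retrieval are all assumptions of mine:

```python
# Sketch of the story interactions I still need components for: record a story,
# store it against the heirloom's ID, and retrieve one to retell later.
# The heirloom ID, schema, and random retrieval are all assumptions.
import sqlite3
import random

HEIRLOOM_ID = "heirloom-0001"   # hypothetical unique ID tied to the physical device

conn = sqlite3.connect("stories.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS stories "
    "(heirloom_id TEXT, recorded_on TEXT, transcript TEXT, audio_path TEXT)"
)

def store_story(recorded_on: str, transcript: str, audio_path: str) -> None:
    conn.execute(
        "INSERT INTO stories VALUES (?, ?, ?, ?)",
        (HEIRLOOM_ID, recorded_on, transcript, audio_path),
    )
    conn.commit()

def retell_story() -> str:
    rows = conn.execute(
        "SELECT recorded_on, transcript FROM stories WHERE heirloom_id = ?",
        (HEIRLOOM_ID,),
    ).fetchall()
    if not rows:
        return "We don't have any stories yet."
    recorded_on, transcript = random.choice(rows)
    return f"Remember {recorded_on}? {transcript}"

store_story("2023-06-14", "The day the kitchen flooded and we ate dinner on the porch.", "story_001.wav")
print(retell_story())
```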

Process Documentation

Initial Interaction Concepts:

Form + Multi-Generational Use Development:

Interaction Development/Iteration 01:

Interaction Development/Iteration 02:

Systems Map Development: