
Intelligence in Environments | Dorcas Lin

Reflection | Molly Steenson – AI Impacts Design + Architecture

What I found most interesting about Molly’s talk, which followed the stories of four non-traditional architects, is the development of design tools and systems that are now applied in the fields of robotics and AI. For example, design patterns are frequently proposed in robotics to create a foundation for what “successful” robot behaviors should look like or fulfill. Overall, my biggest takeaway is that, as designers, we can use AI as a form of non-traditional data curation and collection to change how we look at what people find compelling in our current moment.

Reflection | AI Anatomy, Evolution of AI in the U.S., Enchanted Objects

The evolution of AI in the U.S. was a compelling read because it surfaces multiple issues in how we conceptualize and talk about AI as if it were a thought experiment and not a real possibility. We treat potential threat scenarios and technology developments as situations beyond our lifetime rather than safety issues to look into now, before they grow out of our control. While unnecessary fear-mongering isn’t the way to go, we should have an appropriate emotional response, especially when the race to the perfect AI across multiple nation-states is inextricably linked to war and hinges on some company in Silicon Valley. When a tool like AI becomes a winner-takes-all race, it’s about efficiency rather than quality; whatever is easiest to do gets done first, and that is almost never safety. To avoid or reduce some of the risks behind the expansion of AI, safeguards have to be embedded within the systems, even if our AI is dumb right now.

I enjoyed the categorization of the future trajectory of objects in the Enchanted Objects reading. While it’s nothing too eye-opening, it’s nice to put the behaviors into words to begin to contextualize the scope of the problem space. I think what was most compelling was the idea of objects that are shared rather than owned; this is supported by how COVID-19 has transformed industries like food and delivery, with the rise of ghost kitchens in the U.S.

I really enjoyed the AI Anatomy reading as a balance to the other resources we were given; it emphasizes how AI just repeats current “normals” and is a black box in terms of where everything comes from or is outsourced from. It reminded me of the final season of The Good Place, where they find out that no one has gotten into the Good Place in 500 years because indirect traces of unethical labor and consumption have seeped into everything we do, even buying flowers. I like how the reading broke down each part of the system and rooted it in history, and it makes me think about how else to accurately visualize a system created in an environment like the U.S.

System Diagram | IoT Language Understanding and Generation

Research Questions

  1. We outsource more and more tasks to AI, but how do we know that the decisions AI makes on our behalf are aligned with our beliefs and values?

  2. What are our core beliefs?

  3. How do AIs with different beliefs and values clash or mesh?

  4. What are the “right” or “wrong” beliefs and values?

  5. In the future, who do you call when smart things break?

  6. Who do we call when things break? How is this different across products?

  7. When your

  8. How do we experience AI as indirect users?

  9. How can intelligence reframe capitalism to generate distributed prosperity and respond to the challenges that threaten us as a civilization (major climatic, technological, geopolitical, and demographic changes, among others), driving us toward achieving the long overdue 2030 Sustainable Development Goals?

  10. What does AI look like as a generative tool in collaboration with people?

  11. How does intelligence change our rituals from an individual to collective level?

  12. How can intelligence give insight into how the future feels when one way of looking is overtaken by another?

Case Studies (Ongoing)

Case Study #1 IDEO HyperHuman “Belief Checkout”

Description: “The Belief Checkout’s shelves are full of products that represent values. Pick the ones that reflect you to help steer the supermarket’s algorithm. Say you value sustainability. While eating red meat may not sound like a sustainable choice, eating an overstocked steak might be. The supermarket can help you make choices that square with your values. We let go of control, but our beliefs stay intact.”

Case Study #2 Supervised Machine Learning Trainer

Description: “We believe that mundane maintenance jobs are not just going to disappear when our cities adopt machine learning driven technology. Behind the algorithms and machines are human decisions and biases. We believe that a new class of blue collar jobs such as photo tagging and data set generation for machine learning algorithms will become prevalent. The SMLT is an industrial grade controller that allows a maintenance person to re-train the smart camera by recording new examples in real time. The future maintenance worker will teach the camera what it’s seeing and curate the training data set. He/she will help the camera learn the difference between people and objects and decide who should be classified as an upstanding citizen or a petty criminal.”
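To ground the idea of “re-train[ing] the smart camera by recording new examples in real time,” here is a minimal sketch of how a maintenance worker’s on-the-spot labels could feed an incrementally trained classifier. This is my own illustration, not the SMLT’s actual software; the 16-dimension feature vectors and the two class labels are placeholder assumptions.

```python
# Illustrative only: a worker tags what the camera is seeing, and each tag
# immediately updates a simple incremental classifier (a stand-in for the SMLT).
import numpy as np
from sklearn.linear_model import SGDClassifier

CLASSES = np.array([0, 1])  # placeholder labels, e.g. 0 = "object", 1 = "person"
model = SGDClassifier()     # supports partial_fit, i.e. learning one example at a time

def record_example(features: np.ndarray, label: int) -> None:
    """Called each time the worker records a new labeled example on site."""
    model.partial_fit(features.reshape(1, -1), [label], classes=CLASSES)

# Simulated session: the worker corrects two things the camera got wrong.
record_example(np.random.rand(16), 1)  # "that was a person"
record_example(np.random.rand(16), 0)  # "that was a parked scooter"
```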

Case Study #3 Exoskeleton for Overhead Work

03/31 Final Research Question

Internet of Broken Things | In the future, who do you call when smart things break?

When we imagine the future of intelligence, we’re often excited by the mystical unknown at the “cutting edge of innovation.” We fantasize about IoT and AI applied across every field until it becomes a ubiquitous presence dominating smart cities, augmented body parts, and everything in between. But what about the logistics that carry us into that smooth-running future? Who accounts for the technicalities and infrastructure behind the very likely possibility of your smart thing malfunctioning? There aren’t enough Mark Zuckerbergs or Steve Jobses in the world to fix everything.




Mark’s tea kettle breaks, whole house breaks.

Questions to Consider:

  1. What will current blue-collar jobs lost to robots and AI in the next 5 years transform into in the future?

  2. Would the body of knowledge necessary to fix our smart homes and smart devices be monetized by corporations, or would it be made transparent to everyone (i.e. fixing your IoT would be the equivalent of learning to change your tire today)?

  3. What does the near-future transition point look like, where plumbers and technicians no longer know how to fix a customer’s smart toilet?

  4. What would it look like if software engineering and coding were common knowledge, like a first language or a learned habit?

  5. Do we have the right emotional response to how AI is progressing?

  6. Class in relation to blue collar work…how does that relationship change over time?

What’s the culture behind how we repair? | Past and Present

  1. Repair of static things (i.e. chair, bed frame)

  2. Repair of analog devices (i.e. watches, clocks)

  3. Repair of computers and phones (i.e. Apple Geniuses, third-party vendors)

  4. Repair of larger components (i.e. cars + body shops, send back to manufacturer)

03/31 Case Study | How it Works

“Bots” by Kevin Gaunt

“Bots” is a speculative project that explores a future where narrow artificial intelligences have become as mundane as our fuseboxes are now. Each bot is programmed to perform a single function only (e.g. online shopping, spying on the neighbors’ whereabouts, or organizing surprise presents). Gaunt creates a scenario in which Bots serves as an intervention to better care for seniors aging in place.

The system has 18 bots with different functionalities and personalities, a main control unit that houses the bots in operation, and a set of ‘speaker’ units that are meant to allow for interaction in every room.

Bots Futures Framing:

  1. Speculation on AI becoming commonplace and taking on the role of a caregiver; what parallels are there to Life Alert and elderly care devices over the past 40 years?

  2. The Bots system expands into other roles: gift-giving, anthropomorphized intelligence, and communication with similar AI systems of the future.

  3. Perpetuates current habits and cultural norms of providing the elderly with “easy tech”

  4. Does AI in the role of a companion worsen elderly isolation?

What I find most compelling in this project is the idea of how these narrow AIs communicate and talk with one another. While this sort of “bot to bot” communication exists today within Discord or Slackbots, how can that feature be expanded upon and characterized in a physical system for elderly care?

The process was led by “artistic research and finding unexplored opportunities rather than about discovering a practical design problem.”

Interaction Breakdown

  1. The Bots: Bots communicate in a language understandable to us humans, but compared with today’s virtual assistants they only understand things directly related to their speciality. Over time, bots learn from being praised or critiqued, whether through human interaction or through other bots that possess this functionality. The bots are customizable to every need and adhere to their functions.

  2. The Brain: The bots are mounted on “the brain.” This is where the user directly interfaces with the bots and can see the chatterflow of how the bots communicate with one another, which establishes trust: the user can review the conversation history and troubleshoot as necessary.

  3. The Senses: A Sense consists of a loudspeaker and a microphone and allows bots to be addressed in rooms other than where the main unit is. Each Sense can also have additional embedded sensors (e.g. a camera or a movement sensor).

  4. Privacy Controls: Inspired by how people use Post-its or tape to cover up the camera on their laptops, each sensor in a Sense also comes with a signal blocker. Place the signal blocker in front of the sensor in situations where more privacy is needed. The signal blockers attach magnetically to the sensors and the side of the device.
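It helped me to sketch the Bots / Brain / Senses split above as a rough data model. This is my own illustrative sketch, not code from Gaunt’s project; the class names, the praise score, and the keyword-matching routing are all assumptions.

```python
# Rough, assumed data model of the Bots system described above (not Gaunt's code).
from dataclasses import dataclass, field

@dataclass
class Bot:
    name: str
    speciality: str        # the single function this bot understands
    praise_score: int = 0  # nudged up or down as people (or other bots) praise/critique it

@dataclass
class Sense:
    room: str
    has_camera: bool = False
    blocked: bool = False  # True when the magnetic signal blocker covers the sensor

@dataclass
class Brain:
    bots: list[Bot] = field(default_factory=list)
    senses: list[Sense] = field(default_factory=list)
    chatterflow: list[str] = field(default_factory=list)  # bot-to-bot talk the user can inspect

    def route(self, utterance: str, room: str) -> None:
        """Pass speech heard in a room to whichever bot's speciality it mentions."""
        sense = next((s for s in self.senses if s.room == room), None)
        if sense is None or sense.blocked:
            return  # no Sense in that room, or the privacy blocker is in place
        for bot in self.bots:
            if bot.speciality in utterance.lower():
                self.chatterflow.append(f"{bot.name}: handling '{utterance}'")
```

The chatterflow list is the part the user would see on the Brain to build trust and troubleshoot.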

Bots – Wizard of Oz

Concepts

“Designing the Behavior of Interactive Objects” Personality Traits:

  1. Openness to experience

  2. Conscientiousness

  3. Extraversion

  4. Agreeableness

  5. Neuroticism

Concept #1: Evolved Occupations: Tools for Troubleshooting

  1. 1-2 years: A training or how-to guide that transitions current blue-collar workers into technical roles. Personality traits: patient, uplifting, respectful.

  2. 3-5 years: A toolbox with a special set of tools (does this include fake training people and props?) that supports workers in fixing smart streetlights in smart-city AI. Blue-collar work of the future? (e.g. an exoskeleton that augments workers mentally and physically). Personality traits: confident, disciplined, assuring.

  3. 7-10 years: An e-games tournament venue: who builds it, who fixes it, who takes it down? Personality traits: exhilarating, dangerous, sketchy.

Concept #2: Physicalized Broken IoT

  1. When your smart device breaks, does it tell you? Does it mourn?

  2. “Please! You have to fix me! It hurts!”

  3. How do you detect when something has gone awry?

  4. Chatbot that’s aggressive

  5. Personality traits: unpredictable, childlike, volatile

Concept #3: New Services to Care for Smart Things

  1. Medicine + Love languages: How do you give your smart things TLC?

  2. Make concoctions/treats = code bits/code cafe

  3. Tasting menu

  4. Luxury IoT services

  5. Almost like a veterinarian

  6. Personality traits: soothing, loving, charming

4/1/21 Storyboards

01 Tools for Troubleshooting

End product: a series of artifacts/tools representing specific scenarios of “evolved occupations” and what sorts of tools are now needed to troubleshoot smart devices.

Occupation: Smart Home Device Trainer
Artifact: Training treats for a broken smart vacuum

Occupation: Smart City Technician
Artifact: Multiple fake human bodies to make sure AI recognition works properly

02 Internet of Broken Things in Pain

4/6/21 Internet of Broken Things Project Description

Background: This project assumes a speculative future where smart technology is embedded in all aspects of work and home living and is as essential as electricity, refrigeration, or plumbing. From voice assistants to self-regulating buildings to intelligent street lamps, 127 new IoT devices are connected to the internet every second, but the tradeoff of a growing smart technology grid is a larger attack surface and greater potential for a breach. There’s also a concern about the formation of a networked-intelligence black box that very few people understand. So, how do you know when your smart thing is malfunctioning, and who do you call when your smart things break?

Even in a perfect society, someone has to take out their tools and fix the broken things.

Goal: Empowering individuals. The goal of this project is to create a teachable moment by framing the repair and maintenance of our intelligent systems as something as easy and accessible as repairing everyday objects, like tightening a loose screw or changing a lightbulb. This project seeks to empower people to take control of their smart home systems and learn about security in a tangible and intentional way, through the natural movements and associations we perform each day.

4/11/21 Data Manifestation Aero



4/19/21 Concept Update | Assistant to Help You

Research Question: How might we prevent unwanted data collection and accumulation from our smart devices while regaining access to and control over our information in an environment where we are connected by a growing technological black box?

Notable Quotes: “OH I DON’T HAVE A VOICE ASSISTANT BECAUSE I FEEL LIKE IT’S ALWAYS LISTENING.”

Defining Broken: devices doing something we don’t want, whether we know about it or it happens invisibly

Concept: ____ is a specialty assistant that keeps tabs on when your voice assistant is listening to you without your knowledge; it physicalizes the information collected about you while letting you delete the information you want through a characterized form.

If your Amazon Echo is a characterization of you, Bonk is a characterization of your Echo.



3 features:

  1. Recognizes what data is being accumulated and detects when Alexa or your voice assistant is listening to you without a cue

  2. Shows you what data/information is being kept about you

  3. Lets you discard or keep what you want (the printed hair logs)
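To make these three behaviors more concrete, here is a minimal, hedged sketch of the logic Bonk might follow. It is not a real Echo or Alexa integration; the data structure, the followed_wake_word flag, and the alert function are placeholder assumptions standing in for the physical, characterized form.

```python
# Illustrative sketch only: names and fields are invented, and alert()
# stands in for Bonk's physical, characterized reaction.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CapturedSnippet:
    timestamp: datetime
    transcript: str
    followed_wake_word: bool  # did a deliberate cue ("Alexa ...") precede this capture?

log: list[CapturedSnippet] = []

def alert(message: str) -> None:
    print(f"[BONK] {message}")  # placeholder for the ambient/physical alert

def on_capture(snippet: CapturedSnippet) -> None:
    """Feature 1: notice and flag listening that happened without a cue."""
    log.append(snippet)
    if not snippet.followed_wake_word:
        alert(f"Your assistant listened without a cue at {snippet.timestamp:%H:%M}")

def show_log() -> list[str]:
    """Feature 2: show what has been kept about you."""
    return [f"{s.timestamp:%m/%d %H:%M} - {s.transcript}" for s in log]

def discard(index: int) -> None:
    """Feature 3: let you throw away what you don't want kept."""
    del log[index]
```

In the physical version, show_log() and discard() would map to the characterized form rather than to text output.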







4/21/21 Progress

I realized that I needed to refine my questions further because I was trying to do too much with this one artifact, to the point that it wasn’t actually achieving any of the goals well. I mapped out the problem area to focus specifically on intimate data.

From here, I started brainstorming artifacts that would actually address each of the three forms of intimate data I narrowed in on.

I wanted these characters to be personified because of how rigid and stiff Alexa and current voice assistants seem, which contributes to how guarded people feel about their voice assistants and their data. The personification and whimsicality of the attachments will hopefully alleviate some of the tension between the voice assistant and the owner.






Arms clamping over ears

Look and feel:

4/26/21 Progress | Problem Statement + Form Updates

Refined Goals/Problem Statement: How might we understand and shield against the accumulation of intimate data by our voice assistants in an environment where we are connected by a growing technological black box?

This is more about shielding and understanding the system than about having access and control. These objects are more for alerting than for controlling your content.

Before, I also had each attachment print out a sheet of paper as data accumulated, so you could watch your data pile up. But this introduces more risk: the data is now physicalized and the user has to take the extra step of shredding it, which makes the problem worse. So I decided to limit that and think about how, if this is an alert system, the features can differ and be more ambient rather than physical.

  1. Stickybeak: alerts when Alexa is prompted without a cue. It just has to let the user know that it’s listening so they can move away (typewriter sounds).

  2. Salutarius: prints out a health-report-style summary of the content Alexa is collecting.

  3. Amore: relationships; how can “mood rings,” or the concept behind them, be applied to mapping out what your assistant thinks you feel about your relationships?

Is the idea of three separate objects unrealistic, as if each can only prioritize one type of data collection? What if it were all in one device, as sketched below, since the outputs are now different?
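As a way to think through the “all in one” question, here is a rough configuration sketch showing the three attachments as modes of a single device. The trigger and output descriptions are lifted from the notes above; the structure itself is just an illustrative assumption.

```python
# Hypothetical sketch: the three attachments folded into one device as modes.
ATTACHMENT_MODES = {
    "Stickybeak": {
        "watches_for": "Alexa being prompted without a cue",
        "ambient_output": "typewriter sounds while the assistant is listening",
    },
    "Salutarius": {
        "watches_for": "what content Alexa is collecting",
        "ambient_output": "a printed health-report-style summary",
    },
    "Amore": {
        "watches_for": "what the assistant infers about your relationships",
        "ambient_output": "a mood-ring-style shift in color",
    },
}

def respond(mode: str) -> str:
    """Describe the ambient reaction the consolidated device would make in a given mode."""
    m = ATTACHMENT_MODES[mode]
    return f"{mode} noticed {m['watches_for']} and responds with {m['ambient_output']}."
```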

3D model of personified assistant




