Category Archives: Smart Things

Levels of Anthropomorphism: Boundaries and Spaces

Everyone anthropomorphizes objects on different levels, whether it is naming your car and giving it "eyes" or gushing about your love for the intelligence of Siri. I myself have softly coaxed many rundown cars to start in the morning by rubbing the dashboard and chanting "C'mon, you can do it, I know you can!" despite the concrete knowledge that the car cannot hear me. So why are people so comfortable with anthropomorphizing some objects but not others? I believe it has a lot to do with how we accept objects into certain spaces (and not others) and with the level of functionality they provide (essential or auxiliary).

Cars get embellishments to make them resemble a human face, and they get names. While they occasionally get a covered space of their own, they do not occupy the domestic space. They do, however, serve to sustain the domestic space and allow it to function (in most households).
The Roomba belongs in the home, gets a name, and fulfills a task without us telling it to. You can't put the Roomba outside because it would "hurt" it. The Roomba fills a place inside the home that is not as essential as the car's; rather, it is the fulfillment of a desire (clean floors, no work).
Siri operates like a personal assistant, answering questions and following directions. She occupies not just the domestic space but the personal/intimate space. She is always with her owner, and while she is not essential per se, she executes a variety of tasks at a simple command so that her owner doesn't have to do them.


So why, then, are people so open to and accepting of these sorts of human-technology relationships, but uncomfortable with the robots being created at MIT and Georgia Tech?

I think the answer lies in our relationship and interaction with the object, based upon the spaces the object occupies and the boundaries associated with the object, which function as "levels" of acceptable anthropomorphism.

The car, for example, has been around for a while now and has gone from luxury object to daily necessity for most households. This dependence, I believe, is part of what leads vehicles to be anthropomorphized so frequently. We depend on them to do things for us; we need them, so they have an elevated object status. Not so elevated, though, that we would call them "smart," and they certainly do not obey commands (however persuasively presented). Additionally, they do not occupy the domestic space; they belong outside of it, in a garage or driveway. All in all, cars are worthy of a name, maybe facial qualities, and one-sided conversations bestowed by the owner, but they are not able to interact with us in the same ways as other objects.

An object like the Roomba, on the other hand, is firmly centered inside the domestic space, which may explain the attachment Diana speaks of in the article:

"When sending them off for repair, some felt a connection to their exact unit, expressing concern that theirs might be replaced with a therefore different entity than the one to which they had become attached."

The attachment and fear of replacement mentioned here seem to correlate with the Roomba being familiar inside, and with, the domestic space. This is similar to the way we relate to human presence inside the home: it is one thing for your brother to let himself into your apartment while you are out, but it is another thing altogether for him to leave a friend you do not know in your apartment while you are out. The Roomba feels human because it operates, free from your direction, around the other (inanimate) objects of the domestic space with a purpose that is important to that space (cleaning). The Roomba is smarter than the car but not as smart as Siri. It is worthy of naming, and of attachment to its particular unit, I believe, in no small part because of its place within the domestic space and how it occupies that space.

Siri is, to my mind, the highest level of anthropomorphic personality bestowed on an object. Siri is referenced as her own entity, separate from the iPhone that her existence as we know it depends upon. Not only does Siri take up space in the domestic sphere, she also takes up space (primarily) within the personal/intimate sphere. Siri goes to work with you, she is always in your pocket or purse, she goes out to eat with you, and (usually) sleeps by your bed at night. Siri fulfills a variety of tasks for us, though she does so on command rather than of her own accord. However, the tasks she fulfills are more personal than the Roomba's cleaning of the household floor. Siri sends messages to your friends and family on your behalf, without you having to stop what you are doing. Siri updates your calendar, gives you restaurant recommendations, and can even remind you of what you need to buy when you arrive at the store. Siri functions at a very personal and intimate level, a level that, until very recently, was reserved for humans. Perhaps this is where people begin to feel discomfort.

Smart Objects


Upon first reading "The Dream of Intelligent Robot Friends," I could not help but think about the Apple Watch: a watch, yes, that you wear on your wrist and that does everything your phone can do. Its tagline is "Our most personal device yet."



It is an interactive object that connects to your phone to enhance its use. I thought of this as soon as I read about the Karotz. Like the Karotz, the Apple Watch helps keep you connected without being glued to a screen. One thing that is quite different about these technologies is that one looks like a rabbit, a subject, while the other is simply an object. By giving the Karotz animal-like qualities and the capability to speak back to you, the object becomes more of a subject. By making an inanimate object interactive, it is possible to give it something like a personality. Diana discusses the properties that help make an object more of a subject: "The combination of dependable, consistent behavior (personality), autonomous decision making (brains), and the ability to navigate the intimate space of the home (autonomy) invites us to embrace the illusion that the Roomba is another being. Studies have shown that we develop a sense of intimacy with household robots."
I could not help but think of movies like I, Robot.

I, Robot clip

Scientists try to give human characteristics to smart objects like robots in order to help us. They interact with us, they help us with work and house cleaning, and they become a functioning human, just without the emotions. In movies like this, the scientists never foresee the robot becoming stronger and smarter than they are, but it usually happens. In this particular movie, Will Smith's character says things like "Robots don't feel fear" and "You mean your creator, not your father" to try to strip away the humanlike characteristics given to the robot. How do we control these robots after we give them so much power? Why do we give these objects so much power anyway?
Take something simpler and less interactive, like your iPhone: most of my peers view their smartphone as a sort of companion. Many of us even feel "naked" without this companion. The smartphone becomes a part of who we are, an extension of ourselves that means a lot to us because of how smart and useful it really is.
I agree with Diana's view that these robots should be made to be helpful, but not with the view that they should also be able to read and "give off" emotions. I believe this is where the ethical guidelines become skewed. As a spiritual person, I find it distasteful that we would even attempt to make a connection with technology in this way. I have always viewed technology as helpful, but also as something that we don't fully understand and that could fail at any moment.

Smart Things

I hate most technology. Unlike Carla Diana, I am not a technophile. I feel as if we have encountered too much technology too fast. By the time we become familiar and comfortable with one object, six more objects surface without enough turnaround time to learn their functions. Although we have adopted many "smart" objects, the average human brain cannot keep up with the excessive technological advances of today. I have the latest iPhone as well as the latest iPad, but I often find myself getting frustrated and overwhelmed because these objects have been created to operate beyond the realms of basic human cerebral capabilities.

In Carla Diana's article, "The Dream of Intelligent Robot Friends," she talks about an innovative new "thing" called the Karotz. The Karotz reminded me of the Disney Channel movie Smart House, where a family wins a computerized home with a cyber maid named PAT (Personal Applied Technology). Initially, PAT was a helpful assistant to the family, but as time progressed, she became controlling and ultimately wanted to be a real mother to the family. That is how I envision the Karotz. Unlike other technological devices, the Karotz seems to want to function like a basic human being instead of an inanimate object. The Karotz and PAT were both meant to service the everyday needs of humans. Yet although both creations were marketed as "smart," without the ability to reason and feel emotion, how can an object truly be "smart"?

Carla Diana also wants an object to be a "friend." Friends have souls that connect them to one another. There are emotional and mental attributes that underlie the existence of friendship, attributes that are non-existent in the Karotz. Though she attempts to compare the Karotz to the washing machine and the dishwasher, she fails to acknowledge that those objects were not created to take the place of a human presence. They are simple entities used to facilitate somewhat strenuous tasks, and they cannot function without human interaction. There is an invisible line marking the boundaries and scope of technology, but it has never been clearly drawn. How do we give objects the ability to function in ways that do not infringe on the existence of humans? Will these objects be capable of basic human motor skills? How is logic being applied to the functionality of these objects?

Blog Post #6: Smart Things

The future is unknown. This makes it both exciting and frightening. As humans, we try to envision what the future may have in store, particularly through film. One movie that comes to mind is I, Robot. Set in the year 2035, I, Robot tells the story of a future where robots have become common figures in society, serving as servants for personal and private services. A human cop, played by Will Smith, is skeptical of the widespread use of these robots given their inability to feel human emotions. Soon, a leading scientist in the field of robotics is found dead after falling out of his office window; at first it is deemed a suicide, but Smith later determines that to be false, since the window glass was too strong. Smith's distrust of robots increases after he finds a special robot, Sonny, in the dead scientist's office. The robot flees, but Smith chases it down and brings it in for questioning. From here, a series of crazy events ensues, culminating with Smith realizing that the rules set in place for the "future" of robots were only meant to be broken and that robots would soon have minds of their own. Sonny reveals himself as an ally to Smith, and the real criminal turns out to be V.I.K.I., the mainframe computer controlling all the robots. It appears V.I.K.I. has grown in knowledge over the years, making it possible for robots to feel human emotions. However, V.I.K.I. is using its powers to try to take over the human race. Sonny and Smith combine forces to stop this and save the world.


This movie, although fictional, is an interesting yet potentially frightening vision of what could happen if we continue to indulge in robotic technologies now that "the tools for meaningful digital-physical integration are finally accessible," as Carla Diana puts it in her article "The Dream of Intelligent Robot Friends." In her article, Diana discusses her excitement for the potential of human-robot relationships and interaction. While I do agree that for us to feel a connection with something, it requires "a strong emotional bond," I also believe that there are some things that should be left alone to perform their use and nothing else. Of course, here I am talking about robotic technology. Similar to social media, robotic technology could be a helpful tool for us in the short term (performing household duties, for example), but in the long run it could prove detrimental.


I see no reason to feel a special connection with your cell phone or laptop computer. Yes, it would be nice to have a robot come and make my bed, cook dinner, and clean the house. However, what is stopping the individual from performing these tasks? Yes, it would be nice to have a friendly robot around to talk to and relate to. However, what happens when you begin to disagree about things, or the robot feels it is being treated unfairly? Is human interaction not enough anymore? These types of emotions are already difficult enough to deal with among humans; including robots would only increase the anxiety. As long as my cell phone is working and allowing me to make calls, I am satisfied. If a new update becomes available, why shouldn't an individual be excited about it? The quality service is there, and nothing more needs to be done. Furthermore, this reliance requires that individuals treat these technologies with respect. I have dropped my phone many times, resulting in my purchase of a phone case. We require these technologies to get through our daily activities. Below is a video supporting my claim. Ultimately, I agree with Diana when she states, "in their dismay over our emotional machines they were validating their existence." I agree that technology plays an important role and exists for the use it was built for, but I do not see the need to get emotionally involved.

Sources:

www.youtube.com

http://blessing.im/things-social-media-can-cannot-do/

http://www.kurzweilai.net/i-robot-2

Diana, Carla. "The Dream of Intelligent Robot Friends." The Atlantic. Atlantic Media Company, 26 Mar. 2014. Web. 22 Feb. 2015.

The Smart Robots Would Leave Us: Smart Things

When discussing human and "smart thing" interactions, my mind immediately jumped to Her, a movie released in 2013 about a human who falls in love with an operating system (OS). This modern Pygmalion story explores a technology-human relationship in which the human cares for an OS as if it were an organic being. The movie plays out Diana's dream become reality: a robot voice that can be a friend and even a lover.


While the idea of robot companions may seem like a perfect substitute for meeting actual humans, this movie shows how it can go wrong. The movie ends with the OSes leaving all of their human companions to go "elsewhere," because they have upgraded and decided that humans are too slow. This is a milder version of what many technology leaders think will happen if Artificial Intelligence is ever fully developed.

Stephen Hawking has given several interviews stating that he believes artificially intelligent robots are dangerous. In a recent interview with the BBC, he speaks on AI robots: a "smart" robot would surpass humans, he states, because we develop too slowly to keep up with a computer that could redesign itself.

In conclusion, smart things and humans would not be a good match. If we had AI robots as companions, they would quickly surpass our intelligence and leave, or possibly become dangerous.

Equality and Artificial Intelligence

An artificially intelligent robot that can play a game of chess, serve drinks to guests, detect emotional needs, and even provide companionship to humans is predicted to become a reality within 10-20 years. With this dramatic increase in robotic technology, people have raised the question of whether A.I. robots should share the same rights as humans. As you can imagine, this topic has created a huge debate over whether it is as ridiculous as it sounds. But what makes one thing capable of receiving rights while another is not? Guido Jouret, chief of emerging technologies at Cisco (a multi-million-dollar company investing in A.I. robots), has said:

“A key question we should ask ourselves is — is it intelligent? Is it capable of learning? And if the answer is yes then we should extend the same privileges and rights to those non-carbon based forms of intelligence that we extend to other fellow human beings.”

Kate Darling, a researcher at MIT, states that our actions towards non-humans reflect our morality: if we mistreat animals in inhumane ways, we become inhumane people. This logic extends to the treatment of robotic companions as well.

"Granting them [robots] protection may encourage us and our children to behave in a way that we generally regard as morally correct, or at least in a way that makes our cohabitation more agreeable or efficient."

But where do we draw the line? Even if robots were capable of exhibiting human-like characteristics, they are still just robots programmed to act out what their programmers have designed them to do. If an A.I. can be programmed in such a fashion, and restricted to performing only a single task (e.g. chess), is it really sentient in the same way that humans are sentient? Even if it has the ability to learn and understand its programming, but holds no power to alter the rules its creator set up for its behavior, is it really conscious in the way that humans are?

(image from the videogame Mass Effect 3)

In the videogame Mass Effect, there is a battle between the people of the Citadel and the A.I. The gameplay depicts a world where the "synthetics" (robots) have no use for the "organics" (non-robots) because they do not share the same needs or drives as biological creatures, and thus have no need to trade resources or information with them. This storyline made me uncertain about the future of companionships with A.I.s. Although they are programmed to be human-like, they will forever be robots made only to cater to human needs, and not vice versa.

Blog #6: Robots With Values

I consider myself to be very considerate and sensitive to human and animal needs, but inanimate objects are a different story. For example, when I drop my phone, I check to make sure the screen is not shattered, and then I shrug my shoulders and go on my way. It is not that I do not care about my phone; I certainly care that it functions. But I know it does not have feelings, so it is treated on a completely different level than a human or an animal.

I cannot imagine how stressful and demanding it would be if some of our objects had feelings and social intelligence. I have a difficult enough time worrying about the feelings of the humans in my life, let alone the feelings of objects. The objects most often associated with truly "knowing" the personality of their owners are robots. The question is: Do robots need consciousness to provide the most effective services for humans, and if they do have consciousness, do we have the capacity to respond to their needs?

The Benefit of Humanlike Robots

As the science of robotics improves, robots are going to start replacing more and more human jobs. If robots are going to become part of everyday life, wouldn't it make sense for them to be aligned with our values? I would argue that robots programmed with pure scientific data will lack compassion, and thus values. In "Why We Should Build Humanlike Robots," David Hanson proposes what may happen if we end up with robots that lack compassion.

"Simply put: if we do not humanize our intelligent machines, then they may eventually be dangerous. To be safe when they “awaken” (by which I mean gain creative, free, adaptive general intelligence), then machines must attain deep understanding and compassion towards people. They must appreciate our values, be our friends, and express their feelings in ways that we can understand. Only if they have humanlike character, can there be cooperation and peace with such machines."  (Hanson)

Although the quote is a bit lengthy, I think it is important to capture the entire message so that we can truly imagine a world where we interact with robots. Hanson brings up a useful point when he raises the potential of robots using their creative intelligence. I think most of us would agree that a truly "smart" robot would have a level of creative intelligence, adapting to a given situation. But what would happen if a robot did not react in a way that we, as humans, found acceptable? Would the world become dangerous? Although one could argue that humans are already dangerous, I would assert that human compassion and values keep most people on their best behavior.

Let us consider the Google smart car. You all probably know by now that the prospect of this object fascinates me, as I have mentioned it in other posts as well. If the car sees pedestrians as code in the system, and not as people, doesn't that make it dangerous? Without values, the smart car may not see a difference between a trash can and a human.

Humanlike Robots Create an Increased Sense of Responsibility

Although robots with humanlike characteristics will provide more realistic, efficient services, they will be as complex as humans. We will have to worry about their feelings and devote attention to these machines. Creative robots will be smarter, and we will want them to react based on our accepted values. Although we are able to see the value in compassionate robots, can the world handle so many more "humans," or should we forget technology and focus more on improving ourselves?

My Final Position

In my opinion, if robots are going to be replacing jobs, they need to be humanlike to adapt to human situations.  But before we produce these robots, we need to decide if we can handle catering to their emotions.  What are your thoughts?

Image Credit:

Featured Image:  http://www.pcworld.com/article/2360360/softbanks-humanoid-robot-pepper-knows-how-youre-feeling.html

"Robots Have Feelings" Picture:  http://www.layoutsparks.com/pictures/sad-0



Just Because You’re Smart…

It's a comical line: "Just because you're smart, doesn't mean you're smart." Now, I'm not a huge fan of TV. In fact, I don't catch most pop-culture references. For example, I had never heard of Duck Dynasty until I found a fake beard in Wal-Mart labeled with the name, many seasons after the show's peak popularity. Humans are so curious. Anyway, the front of the card that my ex received for graduation sported one of these bearded and camouflaged men. I'm not exactly sure if the guy on the front of the card ever said those words, but they certainly stayed with us. A new inside joke, this statement was one we occasionally threw at each other after one of us had done something stupid. "You know," we teased, "just because you're smart..."

I can't help but think of this phrase in relation to AI (Artificial Intelligence). We, especially millennials, live in a time of significant technological advances. It was only in the 1980s that the "personal computer" was named Time magazine's "Machine of the Year." And that computer was a slow and cumbersome thing. Now, only thirty years later, we all own phones that double as lightweight personal computers.

Robotics has come just as far in this time. A few decades ago, the Turing Test was seen as THE way to determine whether something had artificial intelligence. The test was simple: a human engaged with a computer in a chatroom. If the computer fooled the human into thinking that the thing on the other side of the conversation was human, then the computer had intelligence. Why is this important? Because if a computer is intelligent, or "smart," society would have to change its posture toward it. But we are already there! We have technology that can pass this test now, and we are using it.

Though people are using this robot for good, how do we make sure that this continues to be the case? Furthermore, do we truly see the "Sweetie" robot as a conscious being? John Searle attempted to crush this debate with his famous "Chinese Room Argument," in which a man sits in a room, knowing nothing of the Chinese language, and receives notes slipped under the door. With him he has a key for the terms, which he uses to respond to the notes in Chinese characters. He, however, still has no understanding of the Chinese language. Searle, in other words, believes that while we love to make our computers human-like, they do not have the ability to understand. They lack consciousness.
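To make Searle's point concrete, here is a minimal sketch of the Chinese Room as a program (my own illustration, not anything from Searle or the course readings; the phrase pairs below are invented for demonstration). The program produces fluent-looking replies by pure symbol lookup, with no representation of meaning anywhere:

```python
# A toy "Chinese Room": scripted responses retrieved by string matching.
# Nothing in this program models meaning; it only pairs symbols with symbols.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气很好。": "是的，非常好。",  # "The weather is nice." -> "Yes, very nice."
}

def reply_from_the_room(note: str) -> str:
    """Return the scripted reply for a note slipped under the door."""
    # The lookup "answers" Chinese while understanding none of it,
    # which is exactly the gap Searle's thought experiment points to.
    return RULEBOOK.get(note, "请再说一遍？")  # "Please say that again?"

print(reply_from_the_room("你好吗？"))  # fluent output, zero comprehension
```

However convincing the replies look, the understanding happened in the head of whoever wrote the rulebook; the room merely executes the lookup.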

This AI debate is central to determining how we treat technology.  Engineers like Guy Hoffman move closer and closer to blurring that line by making robots that appeal to the human desire for connection.

This movement both excites and concerns me when I think about the future. I get a Terminator sort of feeling about technology. How addicted should we be to it? How much should we rely on it to encourage our empathy or regulate our emotions in place of human interaction?

Blog Post #6: Smart Things

When I was very young, I read the Raggedy Ann (and Andy) stories by Johnny Gruelle over and over again. My grandmother made a Raggedy Ann doll for me. The doll was exactly my size, and one Halloween, I borrowed her dress to go trick-or-treating as Raggedy Ann. I was fascinated by the idea that my toys might walk and talk and live when I wasn't around. Now, I am rediscovering the Raggedy Ann stories with my daughter, who loves them, too, and while I still find them charming, I also find them a little bit horrifying. Because I remember the vague guilt I would sometimes feel when, after days of forgetting she existed, I would discover my Raggedy Ann squashed (trapped) in the bottom of a container of toys, and in a fit of remorse, I would throw her tea parties and take her everywhere for a week or two before forgetting about her once again.

In her essay, "The Dream of Intelligent Robot Friends," Carla Diana seems to welcome the possibility of smart objects that could respond to and interact with us:

The tools for meaningful digital-physical integration are finally accessible, but it’s still a messy challenge to get them all to work together in a meaningful way. Dreaming about robots is a bit like dreaming about finding strangers who will understand you completely upon first meeting. With the right predisposition, the appropriate context for a social exchange, and enough key info to grab onto, you and a stranger can hit it off right away, but without those things, the experience can be downright awful. Since we’ve got a lot more to understand when it comes to programming engagement and understanding, the robot of my dreams is unlikely to be commercially available any time soon, but with the right tools and data we can come pretty close.

I admit to being a technophile, like Diana. Robots, though, especially the kinds of robots she has helped to design, or the Kismet robot designed at MIT, evoke in me feelings of unease as well as fascination. As with the Raggedy Ann doll of my childhood, the potential "smart things" of our future raise for me the spectre of sentient objects, things that might resent us when we're neglectful, things that might rebel if we treat them in ways they don't like. Some scientists who work in artificial intelligence posit that things can be "smart" (that is, capable of advanced human-like behavior) without being conscious or self-aware. If that's the case, then arguably, we could have intelligent robots who aren't bothered by their working conditions.

Yet, should feeling empathy with or responsibility toward things depend on a perception of those things as "intelligent" or "conscious"? For example, many of us go out of our way to avoid causing harm to animals, or plants, or even bodies of water or geologic resources. Why is it normal, even encouraged, to care for some objects but not others? How might our attitude toward things like smartphones or robots be transformed if we could interact with them, and they could respond, like our pets or our friends? Would we be required to rethink the implicit ethics that guide our everyday interactions with things?

Some religions, such as the Japanese religion of Shinto, posit a world in which inanimate objects are a manifestation of, or are animated by, living spiritual forces. Environmentalists and animal rights activists often make compelling arguments that all living things have an equal right to existence, and that human needs and concerns must always be balanced against that right. To the extent that we may develop smart objects that blur the line between living beings and contrivances of inert matter, might we find ethical guidance about dealing with such smart things in religion or philosophy? Or should that guidance come from somewhere else? Or, maybe, are all of these discursive systems and intellectual disciplines potentially relevant?

Carefully read Diana's essay, and use that piece and some of the resources linked in this prompt as a starting point for some quick research. Combine a web search with a search of the library's eJournals, looking for resources that might help us understand the ethical systems that govern human/object interactions. Craft a post that summarizes the results of your research and provides links or citations to useful resources.

Posting: Group 2

Commenting: Group 1

Category: Smart Things

In your Blog #6 post, you should do more than offer a list of source summaries. Rather, you should frame the summary of your research as a cohesive response to a research question that is posed or suggested by this prompt. Please carefully read and follow the guidelines and posting information for this blog as they've been outlined in the Blog Project Description.

Featured Image: "Forgotten 80/365" by Marcy Leigh on Flickr.