Why Do We Need a Robot If We Already Have Smartphones?

Recently, at a robotics forum held by Shenzhenware, one of the central questions discussed was: Why do we need a robot if we already have smartphones?

This is certainly a good, albeit harsh, question that anyone working in robotics should think about.

As the product manager of a personal robot, I have my own opinion:

The first obvious difference is that a robot brings a new interaction method. Not only voice, but other interactive elements as well are combined to make up a new experience, one that is completely different from using a smartphone.

The interaction should be bidirectional. In order for a robot to feel closer to its human owner, it should be able to understand us, our feelings and the surrounding environment.

Take voice interaction, for example: the way robots communicate with us is important, but it is even more important to ensure that the robot understands our commands properly, by analyzing the tone and intent behind our words.

As for body language, the Japanese robotics company SoftBank uses upper-limb gestures to help Pepper express herself, while NXROBO uses BIG-I’s eye, including the UI in its pupil and the movement of its eyelid, to show different expressions. These are different ways to tackle the same problem. However, there is a more urgent problem still waiting to be solved: how to make robots recognize and understand our body language better.

A good interaction should be natural, imperceptible, and unintentional.

However, designing a product is a deliberate and creative process, one in which we need to think and experiment in order to create that natural, imperceptible, and unintentional feeling. It is something I like to call “deliberately non-deliberate.”

Only when there is active perception is it possible to realize non-deliberate, natural interaction.

Picture the following scenario: We need to turn off the light when we go outside.

If the light switch is a physical button, we reach for it and press it. However, what we really want is to turn off the light, not to touch a button.

In a smart home, the physical button is replaced by a virtual button in a smartphone app. Still, we have to take the phone out of our pocket, open the app, and then tap the button. That is more steps, but we also gain the ability to control the light remotely.

Right now, we are in the era of voice control, in which a voice command has taken the place of the button. I only need to say something like “Turn off the light” or “Goodbye!” Obviously, this is easier and more convenient than before, but it is still not natural enough. We still have to do something unnecessary just to turn off the light.

Let’s get back to the original issue: the light needs to be turned off because we are leaving and no longer need it. So the most natural way would be that, when we leave, the robot judges whether we still need the light, makes the decision, and executes it. We no longer have to take any unnecessary steps.
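To make this concrete, here is a minimal sketch in Python of that kind of active-perception loop. The PresenceSensor and Light classes and the timing threshold are assumptions made for illustration, not a real robot SDK: the robot simply watches the room and switches the light off once nobody has been seen for a while.

import time

class PresenceSensor:
    """Hypothetical stand-in for the robot's person detection (camera or motion sensor)."""
    def person_in_room(self) -> bool:
        # A real robot would query its perception stack here; we pretend the
        # room is already empty so the example finishes quickly.
        return False

class Light:
    """Hypothetical stand-in for a connected lamp."""
    def turn_off(self) -> None:
        print("Light turned off")

def watch_room(sensor: PresenceSensor, light: Light, empty_for_seconds: float = 30.0) -> None:
    """Switch the light off once nobody has been seen for a while."""
    last_seen = time.monotonic()
    while True:
        if sensor.person_in_room():
            last_seen = time.monotonic()   # someone is still here, reset the timer
        elif time.monotonic() - last_seen > empty_for_seconds:
            light.turn_off()               # empty long enough: act without being asked
            return
        time.sleep(1)

watch_room(PresenceSensor(), Light(), empty_for_seconds=2.0)

The point is not the code but who initiates the action: the person does nothing, and the decision moves from a button, an app, or a voice command into the robot’s own perception loop.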

Of course, I am not saying that voice interaction is unnatural. What I mean is that all the interactive elements, including voice commands, make natural interaction possible. Some people believe that voice interaction equals natural interaction. However, active interaction through a robot can be even more natural, and it is, in fact, an entirely different thing.

Another interesting example is a smart toilet. When the temperature is low, it should heat its seat until it is warm enough for a person to use comfortably; when it is hot, it should stay at its normal temperature. When I am done and flip the pedal, it automatically cleans itself. This is a good example of natural interaction, a case of active perception that needs no voice commands or dialogue. Could you imagine how it would sound if it featured dialogue?

“Master, would you like to heat up?”

“Master, have you finished?”

Even more,

“Master, would you like to hear a joke?”

“Master, may I suggest a new restaurant?”

I would say that wouldn’t be a very good user experience, because I would try my best to never interact with it again.

The second difference: A robot is a combination of perception and movement.

Perception is the basis of interaction. For a robot, perception relies on various sensors, and each sensor has a limited detection range. Movement, however, allows the robot to expand that range. When a robot can turn and focus, it goes well beyond the normal limitations of its sensors; and if it is also capable of moving freely, its ability to perceive our environment becomes truly powerful.
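As a toy illustration of that idea (not any vendor’s API), imagine a camera with a 60-degree field of view mounted on a base that can rotate. The numbers and function names below are assumptions for the sketch; the point is that a simple turn toward a localized sound gives a narrow sensor effectively 360-degree coverage.

CAMERA_FOV_DEG = 60.0  # assumed horizontal field of view of a fixed camera

def angle_to_target(robot_heading_deg: float, target_bearing_deg: float) -> float:
    """Smallest signed angle (in degrees) from the robot's heading to the target."""
    return (target_bearing_deg - robot_heading_deg + 180.0) % 360.0 - 180.0

def bring_into_view(robot_heading_deg: float, sound_bearing_deg: float) -> float:
    """Return the heading the base should turn to so the sound source is visible."""
    diff = angle_to_target(robot_heading_deg, sound_bearing_deg)
    if abs(diff) <= CAMERA_FOV_DEG / 2:
        return robot_heading_deg                 # already inside the camera's view
    return (robot_heading_deg + diff) % 360.0    # rotate the base toward the sound

# A voice is localized at 150 degrees while the camera faces 0 degrees:
print(bring_into_view(0.0, 150.0))  # -> 150.0, the robot turns to face the speaker

In the same spirit, a base that can drive around lets the same sensors cover a whole home instead of a single spot.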

Thus, movement opens up more possibilities for robots. That is why we think a robot is more than just a smartphone (or tablet) plus a shell and wheels.

So, what else can a robot do for us that a smartphone can’t?

Here is a story I told inside my company:

At first, people woke up to the rooster’s crow at dawn. But that means waking up on the rooster’s schedule, not mine.

Then people invented alarm clocks, which allowed us to decide when to get up. But another problem appeared: on weekends, when we could stay in bed until late morning, we were still woken up by the alarm.

This problem was solved with the arrival of smartphones. We could program them to ring at 7:00 AM on workdays and 9:00 AM on weekends.

But a new problem appeared: I don’t always wake up at the same time. Sometimes I get up earlier, and while I’m sitting on the toilet an interesting idea comes to mind, only to be cut short by an annoying alarm ring.


Today, with the arrival of robots, a robot can detect and judge whether I am already awake by using motion tracking and facial recognition. I can even give it a simple rule: if you see me still lying in bed at 7 AM, please wake me up.
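That single rule is easy to express. The Python sketch below uses hypothetical is_person_in_bed() and play_alarm() helpers as stand-ins for the robot’s perception stack and speaker; it illustrates the idea rather than any product’s actual implementation.

from datetime import datetime, time as dtime

WAKE_DEADLINE = dtime(7, 0)  # wake me by 7:00 AM on workdays

def is_person_in_bed() -> bool:
    # Hypothetical: a real robot would use motion tracking and face recognition.
    return True  # placeholder so the example runs

def play_alarm() -> None:
    print("Good morning! Time to get up.")

def maybe_wake_up(now: datetime) -> None:
    """Ring only if it is past the deadline AND the person is still in bed."""
    is_workday = now.weekday() < 5
    if is_workday and now.time() >= WAKE_DEADLINE and is_person_in_bed():
        play_alarm()
    # If I am already up, the robot simply stays quiet.

maybe_wake_up(datetime.now())

The alarm no longer fires on a fixed schedule; it fires on what the robot perceives about me.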

So, there are two main points in this story:

  • Robots can do more than just wake us up. Thanks to their abilities of interaction, movement, and perception, robots can deliver an entirely different user experience, one beyond our imagination, in the same way that we couldn’t foresee what the personal-computer revolution of the ’80s would bring us.
  • Human beings are constantly changing and evolving. It would be amazing to have solutions that adapt to our different lifestyles and to the many situations we experience in our daily lives.

Thousands of years ago, humans were woken up by roosters; 100 years ago, we used alarm clocks; a decade ago, a smartphone; and today it is time to talk about personal robots. That happens because we are human beings, and we look forward to changing and evolving.
