Do You Think Google Glass Is An Intrusion Of Privacy?

Now that Google Glass is available more broadly, we’ll start to see it on more and more people in public. That means you could walk into a restaurant or library and find someone browsing, watching, or recording right there. Is this an intrusion of your privacy?

Google Glass wearers have been somewhat limited until now, as only those who received an invite from Google were allowed to buy the device for $1,500. But after Google opened up the sale of the device to everyone for a single day, on Tax Day no less, we’ll start to see it turn up in all sorts of situations. We won’t know whether wearers are simply wearing it, Googling directions to their next destination, or even recording us. While some people are quite excited about the release of this device, others are put off by it, seeing it as an intrusion on their privacy. After all, these devices were for sale to everyone, not just people with the good sense not to record others without their knowledge. There has already been one violent act related to Google Glass: a wearer reported having it ripped off his head and smashed.

What do you think? Will you be bothered walking into a business and seeing someone wearing Google Glass, not knowing how they are using it? Or does it not bother you at all, perhaps because you wish you had the money to buy one of these devices yourself?

Do you think Google Glass is an intrusion of your privacy?

Image Credit: Tedeytan and Loic Le Meur

Laura Tucker

Laura has spent nearly 20 years writing news, reviews, and op-eds, with more than 10 of those years as an editor as well. She has exclusively used Apple products for the past three decades. In addition to writing and editing at MTE, she also runs the site’s sponsored review program.



Google Project Glass: Siri Or Clippy?

“Your scientists were so preoccupied with whether they could,” Jeff Goldblum memorably said in Jurassic Park, “that they didn’t stop to think if they should.” Has Google done the same with Project Glass? Initial reactions to the wearable computing concept shown off publicly yesterday were predictably gobsmacked, the eye-catching demo video showing an idealized and alluring view of augmented reality. Now that the dust has settled, though, comes the question: is Project Glass Google’s Siri, or is it actually more like Microsoft’s ill-fated Clippy?

Some of us were quickly on-board, offering to open up our wallets to whatever Google wanted to take in order to get our hands on the wearable display. Others have been more reserved, wondering whether the AR system can deliver what Google has promised, and if so whether that’s something we’d actually want in our everyday lives.

Tom Scott, for instance, recreated Google’s concept video with a rather more cynical slant (be warned, some moderate profanity in the first couple of seconds), warning of what might happen if our reality gets just too augmented:

More serious, though, are the questions around practicality and privacy: can Google really deliver a user experience anything like its glossy promo, and even if it can, do we really want the search giant piggybacking on our everyday lives even more? Technical details, as we’ve already observed, are in short supply from Google; the slender prototypes in Google’s press shots are described as “design studies” with no indication as to whether the test hardware is anywhere near as minimal.

Practical experience with actual wearable displays from Lumus suggests Google’s UI mockup may not be quite what we can expect from the real deal. Single-eye overlays aren’t the issue – it’s actually relatively straightforward to incorporate extra information from one eye into your overall vision – but the amount of light coming through from the outside environment. That could potentially wash out the sort of pale, detailed graphics Google has shown us; bold strokes and wireframes generally work better.
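To see why washout matters, here is a minimal sketch of the arithmetic, assuming invented luminance figures rather than any published Glass or Lumus specification: on an additive see-through display the overlay’s light is stacked on top of whatever comes through from the scene, so a faint, detailed graphic keeps almost no contrast against a bright outdoor background, while a bright, bold stroke survives.

```python
# Illustrative sketch: contrast of an additive see-through overlay.
# All luminance values are made-up examples, not measured hardware figures.

def weber_contrast(overlay_nits: float, background_nits: float) -> float:
    """Contrast of an overlay whose light adds on top of the scene luminance."""
    return overlay_nits / background_nits

indoor_bg = 100.0     # dim indoor scene, cd/m^2 (assumed)
outdoor_bg = 5000.0   # bright outdoor scene, cd/m^2 (assumed)

pale_detail = 150.0   # faint, detailed UI graphics (assumed overlay luminance)
bold_stroke = 1200.0  # bright, simple wireframe strokes (assumed)

for name, overlay in [("pale detail", pale_detail), ("bold stroke", bold_stroke)]:
    for scene, bg in [("indoors", indoor_bg), ("outdoors", outdoor_bg)]:
        print(f"{name:11s} {scene:8s}: contrast = {weber_contrast(overlay, bg):.2f}")

# With these example numbers, pale graphics drop to about 0.03 contrast outdoors,
# while bold strokes keep a usable margin -- the practical reason simple,
# bright wireframes tend to survive on see-through displays.
```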

Google Project Glass concept demo:

It’s speed and accuracy that are prompting the most questions, however. Microsoft’s maligned Clippy incurred the wrath of Office users because, most of the time, it got things wrong: sluggish, unhelpful, and generally annoying, it failed the primary benchmark for a digital assistant by actually detracting from usability.

In contrast, Apple’s Siri digital personal assistant on the iPhone 4S is useful because – although its palette of commands is relatively small – it generally gets things right. It adds to the usability of the device because it adds a new avenue of interaction, unlike Clippy with its attention-distracting and lackluster functionality.

Google needs to make sure Project Glass reacts swiftly and accurately if it wants users to don specialist eyewear. It also needs to make sure that user expectations are in line with what can actually be delivered; showing outlandish concepts when the practical implementation will be significantly different can turn off even the most enthusiastic early adopters. Finally, it needs to be upfront about legitimate fears around privacy and data protection, particularly when the reality being augmented contains plenty of people who haven’t signed up to the Project Glass terms of service.


Google Glass Creators Talk “Staring” And The Social Implications Of Wearables

As Google Glass continues to be a unique sort of hardware/software platform in the industry, so too do the creators of the wearable computer remain hot commodities for question-and-answer sessions. In the feature you’re about to see, two members of the main Glass creation and development team discuss the social etiquette involved in the creation of the platform. Steve Lee, Product Director for Glass, and Charles Mendis, an engineer on Glass, spoke during a fireside chat at Google I/O 2013.

As we saw a relatively large number of users working with Glass at this year’s Google developer convention, it only made sense that sessions featuring the team that made the device would be rather popular. Every single one of the chats surrounding Glass was packed to the brim with developers, press, and the like. Steve Lee explained how the device began its life as a wearable.

Steve Lee: I’ve been on the project since the beginning, so almost 3 years now. So I have a pretty good view of how the product has evolved and what the team philosophy has been. And I can definitely say that from the beginning, the social implications and social etiquette of Glass, of people wearing Glass, has been at the top of our mind in how we design and develop the product.

And not only thinking about the people who will buy and use Glass, be wearing Glass, but really everyone. And the social implications for those around the people wearing Glass. 

So I’m proud of the team for taking it seriously and thinking about that early on. 

Lee: Lemme just give a few examples of how the design has evolved to accommodate some of that:

One is that the display is up above the person’s eye. We learned that very early on – some of our very early prototypes actually covered your eyes. We soon discovered how important human eye contact is with people around you. 

Lee: Another common question is about distractions – is Glass distracting? How do I know if you’re paying attention to me? Typically people who are asking that haven’t been around someone with Glass yet. Because once you’re around someone with Glass, you’ll know that they’re paying attention to you. Because they’re looking at you – you have eye contact. 

And if you’re looking at Glass, you’re looking up at the display.

So that’s one example of design. 

Lee: A second thing is with the camera. A lot of the privacy questions are around the camera and picture-taking and videos. That’s why to start recording a video, or to take a picture, the way you do it has clear social cues – a social gesture.

There’s a camera button right here – so I have to raise my hand and press the button to take a picture. Or speak to Glass and say, “OK Glass, take a picture,” or “OK Glass, record a video.” So it’s a clear cue to people around you. 

The third example that I like to give – and I want to really educate people about how Glass works – is that when Glass is active, the display lights up. It lights up not only for me to see, but for other observers to see as well. 

So right now you can’t see a light emitting from Glass because it’s not active. So because of that, you can rest assured that I’m not recording you right now.

Q: Well it could be hacked, right?

Lee: Yes, that’s on Charles.

We take the trust and reliability of our software very seriously, like all other teams at Google. So by design, that’s not intended. Our design is to ensure that the display is active when Glass is active.

And that’ll be part of our SDK – our GDK – it’ll also be part of our policy. So applications won’t be permitted that don’t do that. 

Charles Mendis: Just to add to that, on the social cues – if I’m recording a video of you with Glass, I have to be staring at you. If I want to be recording you, I have to stare at you. And as a human being, if someone is staring at you, you kind of notice.

Like a girl’s walking down the street, and a guy’s just like. *STARE*

So I definitely think there’s going to be social cues. And again, like in the restroom. If you walk into the restroom, even without Glass, and somebody is just looking at you… I don’t know about you, but I’m going to get the heck out of there.

Humans Have A Better Sense Of Smell Than You Think

“Most people with a healthy sense of smell can smell almost anything that gets in the nose,” said John McGann, an associate professor of neuroscience at Rutgers University. “In fact, there used to be a whole field of trying to find odors that people couldn’t smell.”

Used to be, because thanks to our roughly 400 smell receptors, finding things humans couldn’t smell was a bit of a fool’s errand. Humans, as McGann makes clear in a review paper published today in the journal Science, are actually quite phenomenal sniffers.

If you think this flies in the face of conventional wisdom, you’re not wrong—but conventional wisdom is. You have Paul Broca, a 19th-century anthropologist and anatomist, to thank for the error. Back then Broca, whose name now adorns the region of the brain devoted to speech, had something of a feud with the Catholic Church. The Church hated him for “pushing atheism and materialism,” and as a result Broca felt the need to double down on his reductionist view—that is, the idea that any complex phenomenon can be explained by analyzing its simplest mechanisms.

In this case, the mechanism was the olfactory bulb, a brain structure that receives input from the sensory neurons devoted to smell in our nasal passage and sends those signals on to the brain. Humans have a smaller olfactory bulb—relative to our brain size—than other animals. When Broca saw that humans not only possess puny olfactory bulbs but also don’t seem to exhibit certain smell-compelled behaviors (we don’t really run around sniffing each other’s butts the way dogs do), he came to the conclusion that our sense of smell must be lacking. He missed that humans do indeed have some smell-seeking behaviors—we tend to sniff our hands after shaking someone else’s hand, for example (whether or not we realize it).

In 1924, McGann notes in his new paper, Charles Herrick wrote in the book Neurological Foundations of Animal Behavior that the olfactory organs of humans were viewed as “greatly reduced, almost vestigial,” coupled with the idea that “the enormously larger apparatus of most other mammals gives them powers far beyond our comprehension.” Even Freud noted that the sense of smell was “usually atrophied in humans.”

And that, friends, is how a misconception becomes a piece of conventional wisdom. Even today, most people still think that humans are lousy smellers. But while it is true that our olfactory bulb is smaller relative to the size of our brain than, say, a rat’s, it’s also true that our olfactory structures are different from those seen in other mammals. Humans have fewer odor receptors, but they’re packed with far more sensory nerve clusters than those seen in rodents. So we do more with the relatively little we have.

Along with some other cerebral differences, these nerve clusters mean we’re not worse smellers than our furry friends; we just smell differently. A dog might be better at detecting urine on a fire hydrant, for example, while a human may be better at detecting the complex notes in a glass of wine. Or at least we think so. As of yet, no dog has been able to say whether her glass of pinot smelled earthy, or whether the fruity notes she was detecting were cherry or raspberry.

“People under-appreciate our ability to combine incoming chemicals into single percepts,” said McGann. “If I tell you to imagine the smell of coffee, there’s about 150 different chemicals that come out of coffee into your nose. The combination makes the synthetic percept of coffee. You don’t even have access to the specific pieces, it’s just coffee to you.”

“And if you imagine having a cup of coffee while you’re eating a croissant in a bakery, you’ve got this complicated mixture of smells from the coffee, a complicated mixture of chemicals from the croissant, and it’s all against this very broad complicated background of smells from the bakery, but you can tell them apart,” he added. “Your brain is designed, it seems, to pick out the clusters and combine them into single percepts. We really don’t appreciate how many different chemicals are mixed together to make what we perceive as a smell.”

Not everything you think you know about your nose is false. The popular notion that smell is a tremendous piece of the flavor experience we get from food is very true. And women tend to have a better sense of smell in terms of sensitivity, discrimination, and odor detection. It’s a pattern that seems to hold up in rodents: experiments that removed female rats’ ovaries found that their sense of smell declined, while removing the testes from male rats made their sense of smell increase.

McGann’s paper includes some other fascinating factoids. While it seems like just about everything we do can damage our sense of hearing, there’s not much you can do to kill your sense of smell—with a few notable exceptions. Smoking is a big smell killer, though you can often regain some of what you’ve lost after quitting. And in 2009, zinc gluconate-containing products designed to be sprayed or swabbed in the nostrils to help fight off colds were found to cause permanent loss of smell.

“There are people who have lost their sense of smell, and it’s traumatic,” said McGann. “It’s upsetting and difficult to lose their sense of smell. And because people kind of shrug and say that smell is not that important in life, there’s sort of a lack of validation and a lack of motivation and interest in finding ways to help. If we can really bust this myth for good, to have everyone understand that smell really is important for humans and it really does influence our lives, then hopefully that will validate the people who are anosmic [people who can’t smell] and struggling with it. And maybe it’ll help motivate more research or clinical awareness to help those folks.”

How Google Glass Will Influence The Way We Connect On Google+

Social media interaction is in a constant state of evolution. Traditionally, progress has been limited to text-based interactions or static video; however, through Google+ Hangouts, users can ditch asynchronous social media interactions for real-time communication that lets them see the audience blink and follow hand and body gestures, eye movements, and facial expressions.

Characteristics like these, only available through platforms that offer real-time, face-to-face communication, have triggered a shift from traditional social media to a new layer that I call Human Media.

The profound effect of shifting from social media to Human Media inspires raw conversation for a deep level of interaction and understanding of user opinions and feelings. And with the introduction of Google Glass, users are given the possibility of connecting to people and surroundings on a whole new level.

Increased Human Media Interaction

Even with the numerous other group video-chat services out there, Hangouts are the life-breath of Human Media. Hangouts are scalable, allowing interaction with multiple parties in real time while broadcasting to the masses. Glass will be no different.

Glass, mixed with the technology of Hangouts, poses a new reality of improved transparency, in-depth customer service or a customized DIY portal.

Charitable organizations and businesses have the potential to display firsthand the impact of customer donations and purchases. After buying a pair of TOMS Shoes, imagine viewing the real-time effects of giving away a pair of shoes to a child in need, all in first person.

Or, from a customer service perspective, Glass opens the door for businesses to walk customers through any issue firsthand. Having trouble installing a new hard drive? Pull out Glass and have a customer service representative walk you through it. Is someone close to you in need of emergency first aid? Let an experienced professional instruct your every movement.

Converging Social Interaction and Traditional Interaction

Glass has the unique ability to take connections past the screen and into the real world. With Glass, expect commonalities to be out in the open, paving the way for apps to display information on an individual, your mutual connections and any common ground that could be useful in opening a conversation.

To make this possible, researchers at Duke University have created an app for Glass that gives users a visual fingerprint, making it easy to identify an individual. The weakness of this app is that clothing is currently the primary determining factor when identifying a user; however, in the near future apps like it will make it easy to spot friends, pick up colleagues at the airport, or even recognize the seller of an item you purchased on Craigslist – all without a recent photo.
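The Duke team’s implementation isn’t published in this article, but the general idea of a clothing-based “visual fingerprint” can be sketched as a coarse color histogram compared by cosine similarity. The sketch below is an illustrative approximation only; the file names, bin count, and match threshold are assumptions, not the actual app’s code.

```python
# Minimal sketch of a clothing-based "visual fingerprint": a normalized
# color histogram of a captured image, compared by cosine similarity.
# Illustrative only -- not the Duke app's actual algorithm.
import numpy as np
from PIL import Image  # pip install pillow

def clothing_fingerprint(image_path: str, bins: int = 8) -> np.ndarray:
    """Build a coarse RGB histogram as a crude appearance signature."""
    img = np.asarray(Image.open(image_path).convert("RGB"))
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    vec = hist.flatten()
    return vec / (np.linalg.norm(vec) + 1e-9)

def same_person(fp_a: np.ndarray, fp_b: np.ndarray, threshold: float = 0.9) -> bool:
    """Cosine similarity of two fingerprints; the threshold is a guess."""
    return float(np.dot(fp_a, fp_b)) >= threshold

# Hypothetical usage: compare a stored fingerprint with a live capture.
# enrolled = clothing_fingerprint("friend_enrolled.jpg")
# candidate = clothing_fingerprint("glass_capture.jpg")
# print(same_person(enrolled, candidate))
```

A signature built this way changes as soon as the clothing does, which is exactly the limitation described above.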

Integration into Normal Life

The data-gathering properties of Google Glass have already ignited controversy over privacy concerns; however, the capabilities are no different from those of a smartphone. The argument lies in the potential to make data gathering less obvious.

Although critics are voicing opinions over privacy concerns, within two years the reality of Glass, and competing devices, will become a prominent feature of everyday life. Just like the telephone, the cell phone, and the internet, Google Glass will become a social norm – and possibly the third half of your brain, as Google front man Sergey Brin put it in a 2010 New York Times interview.

What many fail to understand is that Glass will be more than a portal for visual updates. This technology opens the door to alter the way we gather and process information, as well as how Google’s search engine serves information. This is the game-changer that has the potential to inspire a new movement of real-time interaction with users across the globe.

IoT Security And Privacy: An Afterthought?

Security and privacy are widely identified as major concerns for the Internet of Things (IoT), but few people discuss them in any detail.

An exception is Jim Hunter, chief scientist and technology evangelist at Greenwaves Systems, a provider of IoT software and services. He holds several IoT-related patents, serves as a co-chair of the Internet of Things Consortium, and works regularly with the security and privacy concerns that are so often acknowledged only in passing.

While security and privacy are often discussed in the same breath, Hunter views them as at least partly separate. According to Hunter, security concerns center on how software and hardware are designed. Too often, security is an afterthought — or as Hunter puts it, “it’s not baked into the product, but is instead sprinkled on top.”

By contrast, he says privacy problems exist “because of the ‘I’ in IoT. When I put information into my web browser, it brings value to someone else — this is the way that the Internet runs and the agreement we have with it. By keeping ‘Internet’ in front of ‘Internet of Things,’ we’re enabling companies to think things will continue to work in the same way. Companies are taking your information to the cloud and then using it to make their product(s) better or selling it to other people. The mentality that your data doesn’t have value is where the problem exists.”

Both security and privacy problems could have been foreseen, Hunter continues — and in some larger companies, they were. But smaller companies often overlook them. “The industry itself hasn’t really been educated to the importance of security,” he says, although he adds that “the tide is turning,” partly because of platforms that offer secure infrastructure, such as Parse on Facebook and Fabric on Twitter.

IoT security has “massive” problems, Hunter says. Perhaps not surprisingly, given that Hunter oversees the AXON Platform, which provides a common language for IoT communication, he sees the core of the problem as communication between protocols and resources.

Such a situation can make consistent security in IoT almost impossible. As Hunter observes, one of the worst cases in cloud or Internet computing is the rogue operator — “anybody who is behind a trusted, protective [security] barrier who elects to do bad things.” In IoT devices, the barrier may be partly or completely missing, with the result that “every device that is dropped into a home could potentially be a rogue operator, listening to all of the other traffic and sending it elsewhere or maliciously affecting the flow of traffic.”

By contrast, Hunter regards privacy as either an ethics question about who has rights to the data, or a business transaction question about what value a consumer gets in exchange for the data.

Until such questions are answered, questions of implementation, such as what information IoT devices should collect, cannot be meaningfully answered. For example, if privacy is an ethical issue, the question of what information IoT devices should collect depends on what consumers are comfortable with. “One of the most ethical ways [to decide],” Hunter says, “is through an agreement –‘you will share this, and in exchange get something of value.’”

If, however, the question is answered in terms of a financial transaction, then the answer becomes what Hunter describes as a data pixel problem — one in which, like the pixels of a .png or .jpeg image, individual pieces of data carry little meaning on their own, yet, when combined with other pieces, form a far more significant picture. Since individuals rarely see the whole picture, an informed decision about what information to share is much more difficult in such cases. For instance, the fact that you were late paying a credit card one month might seem inconsequential by itself, yet it might be combined with other times you were late to create a picture of you as irresponsible with credit.
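To make the “data pixel” analogy concrete, here is a toy sketch in which each record is harmless on its own but a crude aggregation turns the set into a credit judgment. The events, the per-event weight, and the threshold are all invented for illustration.

```python
# Toy illustration of the "data pixel" problem: individually trivial
# records combine into a consequential profile. All data and weights
# are invented for the example.
from datetime import date

late_payments = [          # each event seems inconsequential on its own
    ("card_A", date(2016, 1, 14)),
    ("card_A", date(2016, 5, 3)),
    ("card_B", date(2016, 5, 20)),
    ("card_B", date(2016, 11, 2)),
]

def risk_label(events, per_event_weight=0.2, threshold=0.6):
    """Combine many small signals into one (unfairly coarse) picture."""
    score = min(1.0, per_event_weight * len(events))
    return "flagged as irresponsible with credit" if score >= threshold else "unremarkable"

print(risk_label(late_payments[:1]))  # one late payment -> "unremarkable"
print(risk_label(late_payments))      # the combined picture -> "flagged ..."
```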

IoT security and privacy issues, in the abstract, are little different from those on the Internet or in the cloud. The difficulty is that they potentially occur with every smart device, and solutions that need to strike a balance between security and privacy on the one hand and usability on the other become that much more complex.

For example, Hunter says, “if [a smart device] is super-secure and challenges me for a password every time I try to do something, there’s a conflict there. Usability vs. security needs to trade off — there is always a balance between being usable and secure, and you have to find the right mix.”
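One common way to strike the balance Hunter describes is to challenge for credentials only after a period of inactivity rather than on every action. The sketch below shows that pattern; the ten-minute grace window and the placeholder password check are assumptions for illustration, not a recommendation for any particular device.

```python
# Sketch of one usability/security trade-off: ask for a password only after
# a period of inactivity instead of on every command. The grace window and
# the hard-coded password are placeholders for illustration.
import time

GRACE_SECONDS = 10 * 60  # assumed ten-minute grace period

class DeviceSession:
    def __init__(self):
        self.last_auth = None  # monotonic timestamp of the last successful login

    def authenticate(self, password):
        # Placeholder check; a real device would verify against stored credentials.
        if password != "example-password":
            raise PermissionError("bad password")
        self.last_auth = time.monotonic()

    def perform(self, action):
        expired = (self.last_auth is None
                   or time.monotonic() - self.last_auth > GRACE_SECONDS)
        if expired:
            return f"re-authentication required before '{action}'"
        return f"performed '{action}'"

session = DeviceSession()
print(session.perform("dim the lights"))   # not yet authenticated
session.authenticate("example-password")
print(session.perform("dim the lights"))   # allowed within the grace window
```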

Organizations like the Internet of Things Consortium do their best to educate people about such issues, but, at a time when many struggle with the idea of secure passwords, informing everyone is obviously an uphill battle.

“A large part of the difficulty,” Hunter suggests, “is that we haven’t fully established the value of data. As the value of data comes more into view, and the data pixels and ownership of the data become more of a thing, I think that is going to become a bigger issue.” And even when such questions are resolved, others lie in wait, such as the degree of complicity a manufacturer should bear when its lack of security enables a criminal act. Clearly, the conversation about IoT security and privacy is still at an early stage, even as the size of the IoT increases.
