Lancaster University
The Internet of Things in our Household – Will it go too Far?
Elijah Wong
LICA 240
Dr. Emmanuel Tsekleves
December 2017
Word Count:
Page Count: 6

The Internet of Things in our Household – Will it go too Far?

In this essay I will discuss the current state of Internet of Things devices in our households and the different methods we use to interact with them. I will examine how these interactions have changed and evolved over time, and how I see them developing in the future.

Perhaps the earliest devices that we can consider to be connected can be traced back to the 1970s, when the Internet of Things was originally called the “embedded internet” or “pervasive computing”, in relation to a new invention called radio-frequency identification, better known as RFID. However, the first device that created a connection between hardware and a user was a Coke machine modified by programmers at Carnegie Mellon University. Programmers could connect to the machine over the Internet, check its status, and determine whether there would be a cold drink waiting for them. Researchers then moved such devices into the larger consumer market by allowing control of everyday items in our households like thermostats, refrigerators, microwaves, ovens, televisions, and lights.

Basically, if it is not a computer, smartphone or tablet, and it connects to the Internet, it can be called an Internet of Things device. There are many Internet of Things devices that we use on a daily basis without even realizing it: satellite navigation in our vehicles, Fitbits and garage door openers are all controlled via devices connected to the Internet.

To really understand how Internet of Things devices have entered our households, we must understand the concept of a smart home. The concept of a smart home, at one time only encountered in science fiction, has moved closer to realization over the last ten years. Although the gap between reality and fantasy is still wide, it is important that we start to give proper consideration to the implications this technology holds for the way we will live in our homes in the future. Smart homes are a branch of ubiquitous computing that involves incorporating smartness into dwellings for comfort, healthcare, safety, security, and energy conservation. They offer a better quality of life by introducing automated appliance control and assistive services.

They optimize user comfort by using context awareness and predefined constraints based on the conditions of the home environment. The question is, how do we control and interact with all of these devices?

There are currently many different methods of controlling these Internet of Things devices: today we can use tablets, computers, smartphones and even smart devices with microphones that wait and listen for our audible commands. Technological research and innovation in the past few years have allowed for tremendous growth in this space. The adoption of new technologies such as cloud computing, touch-sensitive surfaces, augmented reality, gestural interfaces, sensors, and virtual reality has transformed both product design and product branding. At the same time, Wi-Fi modules, cloud service providers, chips, and other tools needed to connect products to the Internet have come down the cost curve, enabling the continued growth of the Internet of Things market. These lowered costs allow consumers to install many different sensors, which can connect to many different systems of smart devices in households and unobtrusively monitor and learn from our patterns and habits. The lowered cost of bringing these devices to consumers has allowed these markets to grow and to disrupt established ones. The one thing common to most Internet of Things devices in the early adoption stages was the requirement of a user interface for them to function.

That user interface was found on laptops, tablets or mobile devices, which was an easy and inexpensive way for manufacturers to let us connect to their devices, since we all have these products in our homes. It was not, however, a great way for us to interact with them, as it required the use of our laptops or phones to complete even the most basic of tasks, such as turning off the lights or rotating a dial to change the temperature. This is not exactly the frictionless process that many user experience or interaction designers hope for.

Ben Shneiderman, a researcher specializing in human-computer interaction, came up with a theory called direct manipulation: the objects the user wants to manipulate are kept visible and can be acted upon immediately, through physical actions or pointing rather than complex syntax, with immediate feedback on every action. This strategy can lead to user interfaces that are clear, graspable, foreseeable and controllable. The Internet of Things, however, creates opportunities for interactions that take place in the future or remotely.

To control things that happen in the future, Internet of Things devices must be able to anticipate the user’s future needs and translate the desired behaviour into a set of logical conditions and actions. As Blackwell points out, this is basically programming, and it is a much harder task than a simple, direct interaction; a minimal sketch of what such a rule-based setup might look like follows below.
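To make the idea of conditions and actions concrete, here is a minimal sketch of trigger-action rules of the kind described above. Every device name, state field and rule in it is a hypothetical assumption made for illustration, not part of any real product’s API.

    # A minimal sketch of trigger-action ("if this condition, then that action") rules.
    # Every device name, state field and rule here is hypothetical.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        condition: Callable[[dict], bool]   # inspects the current home state
        action: Callable[[dict], None]      # changes the home state

    def evaluate(rules: list, state: dict) -> None:
        """Fire every rule whose condition currently holds."""
        for rule in rules:
            if rule.condition(state):
                rule.action(state)

    # Rules a user might author ahead of time:
    rules = [
        Rule("lights off when nobody is home",
             condition=lambda s: not s["occupied"] and s["lights_on"],
             action=lambda s: s.update(lights_on=False)),
        Rule("night-time temperature setback",
             condition=lambda s: s["hour"] >= 23 and s["target_temp_c"] > 17,
             action=lambda s: s.update(target_temp_c=17)),
    ]

    state = {"occupied": False, "lights_on": True, "hour": 23, "target_temp_c": 21}
    evaluate(rules, state)
    print(state)   # lights turned off and temperature set back for the night

Even in this toy form, the user has to express their wishes as explicit rules in advance, which is exactly the programming burden Blackwell describes.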

Programming our devices in this way is not necessarily a bad thing, but it may not be appropriate for all users or all situations, and that impacts both usability and accessibility. This brings us to one of the most significant technological improvements: machine learning. Most people have already encountered machine learning; it can be found controlling and optimizing our traffic and public transportation systems, checking the papers we hand in for plagiarism, and in our day-to-day use of the Internet. These are all applications within major multi-million-dollar industries; however, through a great deal of research and development, machine learning has now become affordable and easy enough to bring to consumers. Perhaps one of the best devices we can use to explain and describe machine learning is the Nest, a smart thermostat of the kind becoming increasingly common in our households. In fact, a think tank supported by the UK Cabinet Office stated that “The heating controls in many UK households are outdated and difficult to use, with behavioural factors often leading to suboptimal heating schedules.

For example, heating is often left on when homes are empty or kept unnecessarily high at night…. Smart heating controls have the potential to reduce gas usage, lower carbon emissions, and save households money by automating and simplifying the user-experience, and using sensors and machine learning to encourage users toward more efficient heating schedules.” The Nest thermostat learns your preferred temperature in the house while also tracking your GPS position via its mobile app, so it knows when you are returning home and can optimize the household temperature in time for your arrival. The thermostat slowly learns your daily routines and weekly schedule, which it then uses to optimize the temperature, allowing the system to turn the heating or cooling off entirely when it is not needed.
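As a purely illustrative sketch, the toy model below combines the two behaviours just described: a presence-based setback driven by the phone’s reported location, and a per-hour schedule averaged from the user’s past manual adjustments. The thresholds, defaults and logic are assumptions chosen for the example and are not Nest’s actual algorithm.

    # Toy "learning thermostat": averages the user's past manual settings per hour
    # of day, and applies a setback when the phone's location says nobody is home.
    # This is an illustrative assumption, not Nest's real algorithm.
    import math
    from collections import defaultdict

    AWAY_SETBACK_C = 16.0    # assumed temperature while nobody is home
    HOME_RADIUS_KM = 0.5     # assumed "at home" geofence radius

    def distance_km(a, b):
        """Rough great-circle distance between two (lat, lon) points."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        x = math.sin((lat2 - lat1) / 2) ** 2 + \
            math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
        return 6371 * 2 * math.asin(math.sqrt(x))

    class ToyThermostat:
        def __init__(self, home_location):
            self.home = home_location
            self.history = defaultdict(list)   # hour of day -> temperatures the user chose

        def record_manual_setting(self, hour, temp_c):
            self.history[hour].append(temp_c)

        def target(self, hour, phone_location):
            if distance_km(self.home, phone_location) > HOME_RADIUS_KM:
                return AWAY_SETBACK_C                       # nobody home: set back
            past = self.history.get(hour)
            return sum(past) / len(past) if past else 20.0  # learned, or a default

    t = ToyThermostat(home_location=(54.010, -2.785))
    t.record_manual_setting(18, 21.0)
    t.record_manual_setting(18, 22.0)
    print(t.target(18, (54.010, -2.785)))   # 21.5: learned evening preference
    print(t.target(18, (54.100, -2.700)))   # 16.0: away, so set back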

This brings us to the smart home we all had in mind, where self-learning machines can predict and anticipate our every desire and we no longer have to use a clunky user interface to state what we want.

Another set of current devices that have revolutionized the way we interact with Internet of Things products are smart speakers like Amazon Alexa and Google Home. These are smart devices with active microphones that understand speech and provide a conversational interface to supported devices, allowing for a non-traditional interaction through audible commands rather than a graphical user interface; a deliberately simplified sketch of how a spoken command might be mapped to a device action appears below.
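The sketch below shows, in a deliberately naive way, how a transcribed utterance could be matched to a device command using simple keyword matching. The phrases, device names and actions are assumptions made for illustration; real assistants such as Alexa and Google Home rely on far more sophisticated natural-language understanding.

    # Toy intent matcher: maps a transcribed utterance to a device action by
    # keyword matching. The phrases and device names are illustrative assumptions.
    INTENTS = {
        ("turn on", "light"):   ("living_room_light", "on"),
        ("turn off", "light"):  ("living_room_light", "off"),
        ("set", "temperature"): ("thermostat", "set_temp"),
    }

    def match_intent(utterance: str):
        text = utterance.lower()
        for keywords, (device, command) in INTENTS.items():
            if all(k in text for k in keywords):
                return device, command
        return None   # no match: the assistant would ask the user to rephrase

    print(match_intent("Please turn off the light in the living room"))
    # ('living_room_light', 'off')
    print(match_intent("What's the weather like?"))
    # None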

When we look at these devices from an interaction design viewpoint, we must weigh both the pros and the cons. Voice control can be useful for many people, especially users with physical limitations, such as the elderly and the physically disabled. Users who cannot easily interact with traditional light switches, for example, can simply speak to these microphones. However, there are still many disadvantages that we must take into account. Many of these devices are not actively listening for commands, so users must physically push a button to trigger listening mode. And while feedback is getting better, there is still a noticeable delay before responses return, leaving users questioning whether or not their command was received. This leads us to the privacy concerns we must scrutinize with audio command devices.

Units such as the Google Home and Alexa have active microphones constantly analyzing and learning from everything we say. Earlier this summer, Amazon was considering giving app developers access to Alexa audio recordings. Amazon responded by stating that “We do not share customer-identifiable information to third-party skills apps without the customer’s consent.”

While this sounds reassuring in theory, customers’ consent is normally hidden in terms-of-use conditions that realistically most users never read.

Now that we have looked at the current offerings in the market, we must look to the future. I believe that the future of Internet of Things offerings in the household will centre on augmented reality paired with wearable technology. Currently, augmented reality paired with wearable technology consists mainly of the Google Glass prototype. Google Glass was first released in 2013.

It entered the market in trial phases to select Google customers and was considered a remarkable success in disrupting the market, as there was nothing else available that could compete with its “lightweight design”. Yet while it succeeded in disrupting the market, it failed in its functionality. Google Glass could only take photos and display information; there was virtually no upside, as users had to connect it to a mobile device that could display the same information with a better and easier user interaction. Augmented reality does not really exist in the consumer market today, mostly due to the bulkiness of the equipment.

The processing and memory power required to run such complex code calls for larger batteries, and at times for tethering to a nearby computer or laptop. With the growing capability of cloud computing and machine learning, we will soon be able to remove the need for onboard hardware on augmented reality devices by simply connecting them, via their data connections, to servers off site. This can allow for applications in many different sectors, including health care. With the new interest and demand in the sensor market, we are starting to see more companies invest in and produce both more accurate and a wider variety of sensors. Thanks to a few recent breakthroughs in electroencephalography (EEG) sensors, we are now able to start sensing and tracking the electrical activity inside a person’s brain.

This technology was not just discovered; it has been in use in hospitals as a diagnostic tool for physicians for decades. However, it has not been available to consumers at an affordable price point until now: devices like the OpenBCI, a 3D-printed device, can be made for as low as $100. So how do these EEG sensors work? Electrodes are placed on specific parts of the scalp, at locations defined by the 10-20 system, an international standard that ensures results can be replicated across different patients; a rough illustration of what software can then do with the captured signal follows below.
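As a rough illustration of what software does with the raw signal once it has been captured, the sketch below estimates how much of a synthetic EEG trace’s power falls in the classic alpha band (8-12 Hz). The sampling rate, band edges and the signal itself are assumptions chosen purely for the example.

    # Toy example: estimate alpha-band (8-12 Hz) power from a synthetic EEG trace.
    # The trace is generated, not recorded; the sampling rate and band edges are
    # assumptions chosen for illustration.
    import numpy as np

    FS = 250                       # assumed samples per second
    t = np.arange(0, 4, 1 / FS)    # four seconds of signal

    # Synthetic trace: a 10 Hz "alpha" rhythm buried in noise.
    signal = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

    # Power spectrum via the FFT.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1 / FS)

    alpha = (freqs >= 8) & (freqs <= 12)
    print(f"Fraction of power in the alpha band: {spectrum[alpha].sum() / spectrum.sum():.2f}")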

One of the leading devices in this sector is the Epoc by Emotiv Systems, an affordable device that can be purchased for as low as $700. While these devices might not be able to “mind read” just yet, they have proven themselves in practical use by allowing amputees to map their brain waves to the control of robotic prosthetic limbs. As these devices become increasingly affordable to manufacture through economies of scale, we will start to see them reach the early adopter stage. While these EEG sensors are a great idea, they do have their flaws: most electronic devices placed near the sensors create electrical interference that can confuse them. More importantly, EEG scans are normally conducted in a controlled environment, with sensors placed onto bare skin from which contaminants like body oil have been removed.

Sources