Why the Internet of Things should become the ‘Internet of Transparency’

Algorithms are essential to the Internet of Things.

Algorithms steer the devices connected to our cars’ autopilots; control our home lighting, heating and security; and shop for us. Wearable devices monitor our heart rates and oxygen levels, tell us when to wake up and how to move, and keep detailed records of our whereabouts. Smart cities, powered by a suite of Internet of Things devices and applications, shape the lives of millions of people around the world through traffic routing, sanitation, public administration and security. The arrival and impact of the Internet of Things in our daily lives would not be conceivable without algorithms, but how much do we know about how these algorithms function, their logic and their security?

Most algorithms operate at computational speeds and complexities that prevent effective human review. They work inside a black box. Moreover, most IoT application algorithms are proprietary, so they operate inside a double black box. This status quo may be acceptable as long as the results are positive and the algorithms do no harm. Unfortunately, this is not always the case.

When black box algorithms go bad and cause physical, social or economic harm, they also damage the reputation of the IoT as a whole. Mistakes like these erode the social and political trust the industry needs to ensure broader adoption of smart devices, which is key to moving the field forward.

Opaque algorithms can be expensive, even deadly

Black box algorithms can lead to major problems in the real world. For example, there is a nondescript stretch of road in California’s Yosemite Valley that consistently confuses self-driving cars, and we still don’t have an answer as to why. The open road is naturally full of hazards, but what about your home? Intelligent assistants are there to listen to your voice and carry out your wishes and commands regarding shopping, heating, security and any other home feature that lends itself to automation. But what happens when the smart assistant stops acting smart and listens not to you, but to the TV?

There’s an anecdote that went around the web about smart home assistants initiating unwanted online purchases after Jim Patton, a CW6 News anchor in San Diego, uttered the phrase “Alexa ordered me a dollhouse” on air. Whether this actually happened on a large scale is beside the point. The real problem is that the dollhouse incident seems entirely plausible and, once again, raises doubts about the inner workings of the IoT devices to which we entrust so much of our daily lives, comfort and security.

From an IoT perspective, the intangible damage caused by such events is significant. When one self-driving vehicle malfunctions, all self-driving vehicles get a bad reputation. When one smart home assistant does something stupid, the intelligence of all smart home assistants comes into question.

The data elephant in the room

Every time an algorithm makes a wrong decision, its providers promise a thorough investigation and a quick fix. However, given the proprietary and profitable nature of these algorithms, authorities and the public have no way of verifying what improvements, if any, have been made. Ultimately, we have to take the companies at their word. Repeated offenses make that trust hard to sustain.

One of the main reasons companies don’t reveal the inner workings of their algorithms – to the extent that they understand them themselves – is that they don’t want to expose everything they do with our data. Self-driving cars keep detailed records of every trip. Home assistants track activity around the house, record temperature, light and volume settings, and constantly update the shopping list. All of this personal information is collected centrally so that algorithms can learn from it and feed it into targeted ads, detailed consumer profiles, behavioral alerts and outright manipulation.

Think of the time Cambridge Analytica weaponized the social media profile information of 87 million unsuspecting users to mislead voters, and may have helped upend an entire US presidential election. If a list of your friends and a few online discussion groups were enough for an algorithm to determine the best ways to influence your beliefs and behaviors, what level of deeper and more powerful manipulation could detailed records of your heart rate, movement patterns and sleep allow?

Companies have a vested interest in keeping algorithms opaque, as this allows them to fine-tune them for profit and to compile massive central databases of sensitive user data along the way. As more and more users wake up to this painful but necessary realization, the adoption and development of the Internet of Things is slowing to a halt, with doubt piling up into a mountain that blocks further progress. So what should we do?

Toward the “internet of transparency”

The most immediate focus should be on making what algorithms do more understandable and transparent. To maximize trust and eliminate the negative effects of algorithmic obfuscation, the Internet of Things must become the “internet of transparency.” The industry can create transparency by decoupling AI from central data collection and by open-sourcing as many algorithms as possible. Technologies such as federated learning and edge AI make these steps feasible; what we need is the will to pursue them. It won’t be easy, and some big tech companies won’t give in without a fight, but we would all be better off on the other side.
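The federated approach mentioned above can be sketched in a few lines. The idea: each device fits a model on its own private data and transmits only the learned parameter, which a server averages into a shared model, so raw readings never leave the device. This is a toy illustration with made-up data and a deliberately simple one-parameter model, not any vendor’s actual implementation.

```python
import random

def local_update(w, data, lr=0.1, epochs=5):
    # One device's local training: gradient descent on mean squared
    # error for the model y ≈ w * x. The raw (x, y) pairs stay on-device.
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    # Server step: collect each device's locally trained weight and
    # average them. Only the scalar parameter crosses the network.
    updates = [local_update(global_w, data) for data in clients]
    return sum(updates) / len(updates)

# Hypothetical demo: three "devices", each holding private readings
# that follow the same underlying relationship y = 2 * x.
random.seed(0)
clients = [
    [(x, 2.0 * x + random.gauss(0, 0.01))
     for x in (random.gauss(0, 1) for _ in range(20))]
    for _ in range(3)
]

w = 0.0
for _ in range(20):
    w = federated_round(w, clients)
print(round(w, 2))  # converges close to 2.0
```

The server here learns the shared relationship without ever seeing a single data point, which is precisely the decoupling of AI from central data collection that the argument calls for. Production systems add secure aggregation and masking on top of this basic loop.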

About the author

Leif-Nissen Lundbæk is the co-founder and CEO of Xayn. His work focuses mainly on privacy-preserving algorithms and applications of artificial intelligence. In 2017, he founded the privacy technology company together with professor and research director Michael Huth and COO Felix Hamann. The Xayn mobile app is a private internet search and discovery browser – combining a search engine, a discovery feed and a mobile browser with a focus on privacy, personalization and intuitive design. Winner of Porsche’s first innovation competition, the Berlin-based AI company has worked with Porsche, Daimler, Deutsche Bahn and Siemens.