“View Source: Shenzhen” is now out

June 8, 2017

View Source: Shenzhen cover

Executive Summary: We went to Shenzhen to explore opportunities for collaboration between European Internet of Things practitioners and the Shenzhen hardware ecosystem—and how to promote the creation of a responsible Internet of Things. We documented our experience and insights in View Source: Shenzhen.

Download View Source: Shenzhen as a PDF (16MB) or read it on Medium.

View Source is the initiative of an alliance of organizations that promote the creation of a responsible Internet of Things:

  • The Incredible Machine is a Rotterdam-based design consultancy for products and services in a connected world.
  • The Waving Cat is a Berlin-based boutique strategy, research & foresight company focused on the impact and opportunities of emerging technologies.
  • ThingsCon is a global community of practitioners with the mission to foster the creation of a responsible & human-centric IoT.
  • Mozilla Foundation’s Open IoT Studio aims to embed internet stewardship in the making of meaningful, connected things.

Along for part of the ride were two other value-aligned organizations:

  • Just Things Foundation aims to increase awareness of ethical dilemmas in the development of internet-connected products and services.
  • ThingsCon Amsterdam organizes the largest ThingsCon event globally, and also organized a guided delegation of European independent IoT practitioners to Shenzhen which coincided with our second Shenzhen trip.

What unites us in our efforts is great optimism about the Internet of Things (IoT), but also a deep concern about the implications of this technology being embedded in anything ranging from our household appliances to our cities.

About this document

This document was written as part of a larger research effort that included, among other things, two trips to Shenzhen, a video documentary, and lots of workshops, meetings, and events over a period of about a year. It’s part of the documentation of these efforts. Links to the other parts are interspersed throughout this document.

This research was a collaborative effort undertaken with the Dutch design consultancy The Incredible Machine, and our delegations to China included many Dutch designers, developers, entrepreneurs and innovators. One of the overarching goals of this collaboration was to build bridges between Shenzhen and the Netherlands specifically—and Europe more generally—in order to learn from one another and identify business opportunities and future collaborations.

Creative Industries Fund NL
We thank the Creative Industries Fund NL for their support.

*Please note: While I happen to be the one to write this text as my contribution to documenting our group’s experiences, I cannot speak for the group, and don’t want to put words in anyone’s mouth. In fact, I use the “we” loosely; depending on context it refers to one of the two delegations, to our loose alliance for responsible IoT, or is a collective “we”. I hope that it’s clear from the context. Needless to say, all factual errors in this text are mine, and mine alone. If you discover any errors, please let me know.


For IoT, we need a holistic understanding of security

June 1, 2017

Like the internet, IoT is a big horizontal layer of technologies and practices. It has touch points across industries (like healthcare, automotive, consumer goods, infrastructure) and regulatory areas. That’s what makes it so hard to discuss, to regulate, and to make secure.

More importantly, security has a pretty clear meaning in IT. But I’d argue that for the Internet of Things we need a more holistic concept of security than for traditional IT—one that includes aspects like data protection, privacy, and user rights. A more human-rights-style approach that goes beyond pure security and extends protection into adjacent but equally important areas.

Otherwise even the most technologically secure systems won’t serve the purpose of protecting users from negative consequences.


Monthnotes for May 2017

May 29, 2017

May was AI month at Casa The Waving Cat. Also, #iotlabels. Also, #thingscon.



Impact and questions: An AI reading list

May 24, 2017

As part of some research into artificial intelligence (AI) and machine learning (ML) over the past few months, I’ve come across a lot of reading material.

Here’s some that stood out to me and that I can recommend looking into. Please note that this is very much on the non-technical end of the spectrum: primers, as well as pieces focusing on ethics, societal impact, and other so-called “soft” aspects, i.e. political, humanitarian, and business-related ones. These are the types of impact I’m most interested in and that are most relevant to my work.

The list isn’t comprehensive by any means—if you know of something that should be included, please let me know!—but there’s a lot of insight here.

Enjoy!

Basics, primers:

Resources, reading lists, content collections:

Books:

Articles:

Reports, studies:

Presentations, talks:

Fiction:

For completeness’ sake (and as a blatant plug) I include three recent blog posts of my own:


First steps with AI & image recognition (using TensorFlow)

May 24, 2017

After reading the excellent O’Reilly book/essay collection What is Artificial Intelligence? by Mike Loukides and Ben Lorica, I got curious—and, finally, emboldened enough—to get my hands dirty with some n00b-level AI and machine learning.

Pete Warden’s TensorFlow for Poets, part of Google Codelabs, seemed like a logical starting point for me: My coding skills are very basic (and fairly dismal, tbh), and this is technically way beyond my skill level and comfort zone. But I feel confident that with a bit of tutorial-based hand-holding I can work my way through the necessary command-line action. From there, I can take it further.

For this first time I would stick to the exact instructions, line by line, mostly by copy & paste. It’s not the deepest learning that way, but it helps me walk through the process once before changing things up later.

So, basic setup. I won’t include links here as they’re updated and maintained over on TensorFlow for Poets.

Get Docker up and running

Docker creates a Linux virtual machine that runs on my MacBook Pro. This hit a first small bump which, after some reading up on Docker configuration, turned out to have the oldest solution in all of tech: relaunch the Docker app. Boom, works.
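Once the container is up, a quick sanity check confirms that TensorFlow is actually importable. A minimal sketch, assuming the TensorFlow 1.x image the codelab pointed to at the time:

```python
# Quick sanity check inside the Docker container.
# Assumes the TensorFlow 1.x (graph API) image the codelab used at the time.
import tensorflow as tf

print(tf.__version__)

# The classic graph-API "hello world": build a constant, run it in a session.
hello = tf.constant("Hello from inside the container")
with tf.Session() as sess:
    print(sess.run(hello))
```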

Get TensorFlow installed & download images for training set

As I continued the setup and installed TensorFlow as per the instructions, there was some downtime while the system was downloading and installing.

The tutorial suggests experimenting with the TensorFlow Playground. Which is great, but I’d done that before. Instead, I decided to prepare my own set of images to train the Inception model on later. (After first following the tutorial exactly, including using their flower-based training image set.)

The training set consists of a few hundred photos each of five different types of flowers. Downloading this might take a while.
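While the download runs, it helps to look at the structure the retraining step expects: one subfolder per label, each full of JPEGs. A small sketch to count what arrived; the /tf_files path is the one I remember the codelab mounting, so adjust as needed:

```python
import os

# One subfolder per flower type (daisy, roses, ...), a few hundred JPEGs each.
# /tf_files/flower_photos is the codelab's default location; yours may differ.
image_dir = "/tf_files/flower_photos"

for label in sorted(os.listdir(image_dir)):
    folder = os.path.join(image_dir, label)
    if os.path.isdir(folder):
        jpgs = [f for f in os.listdir(folder) if f.lower().endswith(".jpg")]
        print("%-12s %d images" % (label, len(jpgs)))
```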

First round of (re)training: Inception

The Inception network (v3, in this case) is a pre-trained TensorFlow network that we can retrain on our own images. It’s a tad overpowered for what we need here, according to our tutorial: “Inception is a huge image classification model with millions of parameters that can differentiate a large number of kinds of images. We’re only training the final layer of that network, so training will end in a reasonable amount of time.”
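The retraining itself boils down to a single script invocation. Here’s a rough sketch of how I kick it off from Python; the flag names and /tf_files paths are the ones I remember from the codelab’s retrain.py, so treat the command in the codelab itself as the authoritative version:

```python
import subprocess

# Retrain only the final layer of Inception v3 on our own image folders.
# Flag names and paths follow the codelab's retrain.py as I remember them.
subprocess.call([
    "python", "tensorflow/examples/image_retraining/retrain.py",
    "--bottleneck_dir=/tf_files/bottlenecks",        # cached feature vectors per image
    "--model_dir=/tf_files/inception",               # where Inception v3 gets downloaded
    "--output_graph=/tf_files/retrained_graph.pb",   # the retrained model
    "--output_labels=/tf_files/retrained_labels.txt",
    "--image_dir=/tf_files/flower_photos",           # one subfolder per label
    "--how_many_training_steps=500",                 # the knob I vary later on
])
```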

Inception downloads and goes to work. This is my cue: I go have lunch. It might take up to 30 minutes.

Half an hour later I’m back. I’ve had lunch, the Roomba has cleaned the kitchen, and the training is done.

Final test accuracy = 91.4% (N=162)

Train your own

Now it was time for me to take it to the next level: put TensorFlow to work on my own image training set. I decided to go with a few members of the ThingsCon family: Iskander, Marcel, Max, Monique, Simon, and myself. Six people total, with around 10-20 photos of each.

Now, these photos are mostly from conferences and other ThingsCon-related activities, like our summer camp and our Shenzhen trip. I added some personal ones, too.

A bunch are really horrible photos that I included to really test the results: in addition to the tiny sample of training images, some are hard to discern even for human eyes. (There’s one that contains only a small part of Max’s face, for example—his gorgeous giant blond beard, but nothing else.) Lots are group pics. Many contain not just one but two or more of the people in this sample. These are hard images to train on.

Let’s see how it goes. I swap out the folders and files and run Inception again.

ZeroDivisionError

I had been warned about this. If a sample is too tiny, the network sometimes can’t handle it. We need more pics! I pull a few more from personal files, a few off of the web. Now it’s just over 20 images per “category”, aka person. Let’s try this again.

ZeroDivisionError

Still no luck. My working theory is that it’s too many photos with several of the yet-to-learn people in them, so the results are ambiguous. I add more pics I find online for every person.

I don’t want to make it too easy though, so I keep adding lots of pics in super low resolution. Think thumbnails. Am I helping? Probably not. But hey, onwards in the name of science!

Going back through the training set I realize just how many of these pics contain several of the yet-to-learn categories. Garbage in, garbage out. No wonder this isn’t working!

Even something as simple as this drives home the big point of machine learning: It’s all about your data set!

I do some manual cropping so that Inception has something to work with. A clean data set with unambiguous categories. And voilà, it runs.
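To catch this earlier next time, a trivial pre-flight check on the training folders would have saved me a round or two. A minimal sketch; the folder path is hypothetical and the minimum count is my own guess at what keeps the validation split from ending up empty (my assumption about what triggers the ZeroDivisionError):

```python
import os

# Warn about labels with too few images before kicking off retraining.
# MIN_IMAGES is a guess, not an official threshold.
MIN_IMAGES = 20
image_dir = "/tf_files/people"  # hypothetical folder: one subfolder per person

for label in sorted(os.listdir(image_dir)):
    folder = os.path.join(image_dir, label)
    if not os.path.isdir(folder):
        continue
    images = [f for f in os.listdir(folder)
              if f.lower().endswith((".jpg", ".jpeg", ".png"))]
    status = "ok" if len(images) >= MIN_IMAGES else "probably too small"
    print("%-10s %3d images  (%s)" % (label, len(images), status))
```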

Now, after these few tests, I snap two selfies, one with glasses and one without.

The output without glasses:

peter (score = 0.66335)
max (score = 0.14525)
monique (score = 0.07219)
simon (score = 0.05728)
marcel (score = 0.04428)
iskander (score = 0.01765)

The output with glasses:

peter (score = 0.75252)
max (score = 0.12352)
simon (score = 0.05971)
monique (score = 0.04001)
marcel (score = 0.01397)
iskander (score = 0.01027)

Interestingly, with glasses the algorithm recognizes me better even though I don’t wear any in the other images. Mysterious, but two out of two. I’ll take it!
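For the curious: the classification step is itself a short script. This is roughly what the codelab’s label_image.py does; the tensor names (“final_result”, “DecodeJpeg/contents”) are the ones the retrained Inception graph exposes, and the file paths are just my local ones:

```python
import tensorflow as tf  # TF 1.x graph API

# Labels written out by the retraining step, one per line.
labels = [line.rstrip() for line in tf.gfile.GFile("/tf_files/retrained_labels.txt")]

# Load the retrained graph into the default graph.
with tf.gfile.FastGFile("/tf_files/retrained_graph.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# Read the selfie as raw JPEG bytes; the graph decodes it itself.
image_data = tf.gfile.FastGFile("selfie_with_glasses.jpg", "rb").read()

with tf.Session() as sess:
    softmax = sess.graph.get_tensor_by_name("final_result:0")
    predictions = sess.run(softmax, {"DecodeJpeg/contents:0": image_data})

# Print all labels, most likely first -- the same format as the output above.
for i in predictions[0].argsort()[::-1]:
    print("%s (score = %.5f)" % (labels[i], predictions[0][i]))
```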

How about accuracy?

The tests above are the equivalent of a “hello world” for machine learning: The most basic, simple program you can try. They use the Inception network that’s been built and trained for weeks by Google, and just add one final layer on top, to great effect.

That said, it’s still interesting to look at which factors influence the results. So let’s run the same analysis for 500 iterations compared to, say, 4,000!

The test image I use is a tricky one: it’s of Michelle, with a hand in front of her face.

500 iterations on a set of photos (this time, of family members):

michelle (score = 0.53117)

This isn’t the result of a confident algorithm!

So for comparison, let’s see the results for 4,000 iterations on the same training set:

michelle (score = 0.75689)

Now we’re talking!

At this point I’m quite happy. For a first test, this delivers impressive results and, maybe even more importantly, it’s an incredible demonstration of the massive progress we’ve seen in the tooling for machine learning over the last few years.


Some thoughts on Google I/O and AI futures

May 18, 2017

Google’s developer conference Google I/O has been taking place these last couple of days, and oh boy have there been some gems in CEO Sundar Pichai’s announcements.

Just to get this out right at the top: Many analysts’ reactions to the announcements were a little meh: Too incremental, not enough consumer-facing product news, they seemed to find. I was surprised to read and hear that. For me, this year’s I/O announcements were huge. I haven’t been as excited about the future of Google in a long, long time. As far as I can tell, Google’s future today looks a lot brighter still than yesterday’s.

Let’s dig into why.

Just as a quick overview and refresher, some of the key announcements (some links to write-ups included).

Let’s start with some announcements of a more general nature around market penetration and areas of focus:

  • There are now 2 billion active Android devices
  • Google Assistant comes to iOS (Wired)
  • Google has some new VR and AR products and collaborations in the making, both through their Daydream platform and stand-alone headsets

Impressive, but not super exciting; let’s move on to where the meat is: artificial intelligence (AI), and more specifically machine learning (ML). Google announced a year ago that it would turn into an AI-first company. And they’re certainly making good on that promise:

  • Google Lens super-charges your phone camera with machine learning to recognize what you’re pointing the camera at and give you context and contextual actions (Wired)
  • Google turns Google Photos up to 11 through machine learning (via Lens), including not just facial recognition but also smart sharing.
  • Copy & paste gets much smarter through machine learning
  • Google Home can differentiate several users (through machine learning?)
  • Google Assistant’s SDK allows other companies and developers to include Assistant in their products (and not just in English, either)
  • Cloud TPU is the new hardware Google is launching for machine learning (Wired)
  • Google uses neural nets to design better neural nets

Here’s a 10min summary video from The Verge.

This is incredible. Every aspect of Google, both backend and frontend, is impacted by machine learning. Including their design of neural networks, which are improved by neural networks!

So what we see there are some incremental (if, in my book, significant) updates in consumer-facing products. This is mostly feature level:

  • Better copy & paste
  • Better facial recognition in photos (the error rate of their computer vision algorithms is now better than the human error rate, according to ZDNet)
  • Smarter photo sharing (“share all photos of our daughter with my partner automatically”)
  • Live translation and contextual actions based on photos (like pointing the camera at a wifi router to read the login credentials and connect you automatically).
  • Google Home can tell different users apart.

As features, these are nice-to-haves, not must-haves. However, they’re powered by AI. That changes everything. This is large-scale deployment of machine learning in consumer products. And not just consumer products.

Google’s AI-powered offerings also power other businesses now:

  • Assistant can be included in third-party products, much like Amazon’s Alexa. This increases reach, and also the datasets available to train the machine learning algorithms further.
  • The new Cloud TPU chips, combined with Google’s cloud-based machine learning framework around TensorFlow, mean that they’re now in the business of providing machine learning infrastructure: AI-as-a-Service (AIaaS).

It’s this last point that I find extremely exciting. Google just won the next 10 years.

The market for AI infrastructure—for AI-as-a-Service—is going to be mostly Google & Amazon (which already has a tremendous machine learning offering). The other players (IBM, and maybe Microsoft at some point?) aren’t even in the same ballpark. Potentially there will be some newcomers, but it doesn’t look like any of the other big tech companies will be huge players in that field.

As of today, Google sells AI infrastructure. This is a model that we know from Amazon (where it has been working brilliantly), but so far hadn’t really seen from Google.

There haven’t been many groundbreaking consumer-facing announcements at I/O. However, the future has never looked brighter for Google. Machine learning just became a lot more real and concrete. This is going to be exciting to watch.

At the same time, now’s the best time to think about the societal implications, risks, and opportunities inherent in machine learning at scale: We’re on it. In my work as well as in our community over at ThingsCon, we’ve been tracking and discussing these issues in the context of the Internet of Things for a long time. I see AI and machine learning as a logical continuation of this same debate. So in all my roles I’ll continue to advocate for a responsible, ethical, human-centric approach to emerging technologies.

Full disclosure: I’ve worked many times with different parts of Google, most recently with the Mountain View policy team. I do not, at the time of this writing, have a business relationship with Google. (I am, however, a heavy user of Google products.) Nothing I write here is based on any kind of information that isn’t publicly available.


Are we the last generation who experienced privacy as a default?

April 29, 2017


Attack of the VR headsets! Admittedly, this photo has little to do with the topic of this blog post. But I liked it, so there you go.

The internet, it seems, has turned against us. Once a utopian vision of free and plentiful information and knowledge for all to read. Of human connection. Instead, it has turned into a beast that reads us. Instead of human connection, all too often we are force-connected to things.

This began in the purely digital realm. It has long since started to expand into the physical world, through all types of connected products and services that track us—notionally—for our own good. Our convenience. Our personalized service. On a bad day I’m tempted to say we’ve all allowed ourselves to be turned into things as part of the internet of things.

///

I was born in 1980. Just on the line that marks the outer limit of millennial. Am I part of that demographic? I can’t tell. It doesn’t matter. What matters is this:

Those of us born around that time might be the last generation that grew up experiencing privacy as a default.

///

When I grew up there was no reason to expect surveillance. Instead there was plenty of personal space: near-total privacy, except for neighbors looking out of their windows. Also, as the other side of that coin, near-total boredom—and certainly disconnection.

(Edit: This reflects growing up in the West, specifically in Germany, in the early 1980s—it’s not a shared universal experience, as Peter Rukavina rightfully points out in the comments. Thanks Peter!)

All of this within reason: It was a small town, the time was pre-internet, or at least pre-internet access for us. Nothing momentous had happened in that small town in decades if not centuries. There it was possible to have a reasonably good childhood: healthy and reasonably wealthy, certainly by global standards. What in hindsight feels like endless summers. Nostalgia, of course. It could be quite boring. Most of my friends lived a few towns away. The local library was tiny. The movie theater was a general-purpose event location that showed two movies per week, on Monday evenings. First one for children, then one for teenagers and adults. The old man who ticketed us also made popcorn, sometimes. I’m sure he also ran the projector.

Access to new information was slow, dripping. A magazine here and there. A copied VHS or audio tape. A CD purchased during next week’s trip to the city, if there was time to browse the shelves. The internet was becoming a thing; I kept reading about it. But until 1997, access was impossible for me. Somehow we didn’t get the dialup to work just right.

What worked was dialing into two local BBS systems. You could chat with one other person on one, and with three on the other. FidoNet made it possible to have some discussions online, albeit ever so slowly.

///

When I grew up there was no expectation of surveillance. Ads weren’t targeted. They weren’t even online, but on TV and in newspapers. They were there for you to read, every so often. Both were boring. But neither TVs nor newspapers tried to read you back.

///

A few years ago I visited Milford Sound. It’s a fjord on the southern end of New Zealand. It’s spectacular. It’s gorgeous. It rains almost year round.

If I remember a little info display at Milford Sound correctly, the man who first settled there was a true loner. He didn’t mind living there by himself for decades. Nor, it seems, did he mind when the woman who was to become his wife joined him. It’s not entirely clear how he felt about visitors starting to show up.

Today it’s a grade A tourist destination, if not exactly for mass tourism. It looks and feels like the end of the world. In some ways, it is.

As we sought shelter from the pouring rain in the boat terminal’s cafeteria, our phones had no signal. Even there, though, you could connect to the internet.


Connectivity in Milford Sound comes at a steep price

Internet access in Milford Sound is expensive enough that it might just convince you to stay offline for a bit. It worked for us. But even there, though they might be disconnected, the temps who work there during tourist season probably don’t get real privacy. On a work & travel visa, you’re likely to live in a dorm situation.

///

The internet has started to track every move we make online. I’m not even talking about government or criminal surveillance. Called ad tech, online advertisements that track your every move notice more about you than you do about them. These are commercial trackers. On speed. They aren’t restricted to one website, either. If you’ve ever searched for a product online, you’ll have noticed that it keeps following you around. Even the best ad blockers don’t guarantee protection.

Some companies have been called out because they use cookies that track your behavior and can’t be deleted. That’s right, they track you even if you explicitly try to delete them. Have you given your consent? Legally, probably—it’s certainly hidden somewhere in your mobile ISP’s terms of service. But really, of course you haven’t agreed. Nobody in their right mind would.

///

Today we’re on the brink of taking this to the next level with connected devices. It started with smartphones. Depending on your mobile ISP, your phone might report back your location, and they might sell your movement data to paying clients right now. Anonymized? Probably, a little. But these protections never really work.

Let’s not let that happen, but let’s be very deliberate about our next steps. The internet brought tremendous good first, and then opened the door to tracking and surveillance abuse. IoT might go straight for the jugular without the benefits, if we make it so. If we allow that to happen.

///

The internet, it seems, has turned against us. But maybe it’s not too late just yet. Maybe we can turn the internet around, especially the internet of things. And make it work for all of us again. The key is to rein in tracking and surveillance. Let’s start with ad tech.