Robot Keggers and Roomba Spies

Or, How Do We Know Our Agents are Really Ours?

Chris Noessel
12 min read · Mar 2, 2018

Recently I was reading some angry op-eds. The authors were upset that, if you order a product through Amazon Alexa without specifying a brand, it will pick one for you, and that one will be favorable to Amazon. I guess this might come as a surprise if you’re not used to thinking about agents, but in publishing Designing Agentive Technology: AI That Works for People (Rosenfeld 2017), I’ve thought about them a lot, and it comes as absolutely no surprise to me. In fact, it’s going to get worse. Or at least more complicated. Let me explain.

It goes back to the 1970s, when two different sets of authors (Ross and Mitnick, and Jensen and Meckling) formalized something called the Principal-Agent problem. In this article I’m going to tell you what it looked like then, how it’s going to change, and what it will mean for us in the future.

The classic problem: Principal and a (Human) Agent

If one person pays another to do something, but the work is done out of sight, the employer is left wondering whether the employee is doing the work strictly in the way the employer wants, or doing it in a way that serves some of the employee’s own self-interest. The economics papers call the employer a principal and the employee an agent, because the employee acts on behalf of the principal.

Let’s look at the example of trucking. (For purposes of discussion let’s turn the clock back to a bygone era when trucks weren’t GPS-monitored around the clock.) June’s employer, Tricky Trucking, wants all its deliveries as soon as possible. It hires June to haul a load from San Francisco to New York City. En route, June realizes she’ll be near her cousin John in Wyoming, and stops off to catch up with him, shoot the breeze, and grab a beer. When she arrives in New York, her supervisor Tom notes that she took 4 hours longer than average and asks why. June says there were terrible traffic jams, queues at gas stations, and some weather, and Tom pretty much just has to accept it. Both Tom and Tricky Trucking know that June might be lying, but can’t do much about it.

If June is paid just when she delivers goods, sure, her visit with John might cost her a little because she has less time to start her next haul, but she also gets to see her cousin, so it’s worth it to her. Her self-interest sits right there alongside her employer’s interests when she makes decisions about how she will behave. Tricky Trucking, on the other hand, gets nothing from the detour. This “waste” is called agency cost, and employers and their economist friends have two ways they’ve gone about trying to minimize it.

The first is to try and increase surveillance. Add GPS monitors to the truck. Install cameras in the cabs. Track the actual traffic and weather affecting the truck in real time. Now Tricky has more data and can challenge June about the detour. What weather are you talking about? We show no traffic around you, etc. Surveillance works, but makes June feel like she’s not trusted, and forces Tricky to monitor her behavior and negotiate infractions, which is, itself, costly. So while this has gone full panopticon in the real world, it’s less satisfying to the principal, the agent, and even the economists who want something they can talk about.

Which brings us to the second method, and that’s to align the incentives of the principal and the agent. In this scenario, Tricky offers June a bonus if she can get there by a certain time. This bonus shrinks the longer she takes past that time. Now June has an incentive to skip her catch-up with John (or at least ask him to come meet her at one of her scheduled breaks) and get the haul to the dock as soon as possible. She’ll even be more willing to be vigilant and proactive about avoiding problems like weather and traffic, because they hit her in the pocketbook. Her self-interest is now aligned with Tricky’s interest, and this helps reduce Tricky’s agency costs. Tricky just has to be smart about making sure the bonus on offer hits the sweet spot: large enough to provide motivation, small enough that it’s still worth doing business, and with a shrink rate that doesn’t cause June to just give up if she’s running late. They also have to ensure that meeting the bonus is a challenge, but neither illegal nor so challenging that agents become discouraged. That’s not an easy spot to identify, giving economists lots of sweet, sweet job security.
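
To make that sweet spot concrete, here is a minimal sketch of how a shrinking bonus might be computed. The numbers (a $500 bonus that shrinks by $100 per hour late) are invented purely for illustration.

```python
# A minimal sketch of the incentive math, with made-up numbers.
# The bonus starts at a fixed amount and shrinks for every hour
# past the target delivery time, never dropping below zero.

def bonus(hours_late: float, full_bonus: float = 500.0,
          shrink_per_hour: float = 100.0) -> float:
    """Return the bonus paid for a delivery `hours_late` past the target."""
    return max(0.0, full_bonus - shrink_per_hour * hours_late)

print(bonus(0))  # 500.0 -- on time, June collects the full bonus
print(bonus(4))  # 100.0 -- the four-hour detour now has a visible price
print(bonus(6))  #   0.0 -- shrink too fast and a late June has no reason to hurry at all
```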

End of the classic problem

But, notably, this classic version of the agency dilemma is about an employer and an employee. When I speak of agents in my book I’m talking about technological agents, like Roombas, self-driving cars, Spotify, and yes, Alexa. You might think that robots like Roombas or self-driving cars lack June’s self-interest, but they are retailed AIs, and the way they behave can quietly hide the retailer’s own self-interest. The principal is now a user, and things are going to get more complicated.

The more complicated problem: Principal (a User), a (Technological) Agent, and a Retailer

(Or the Principal-Agent-Retailer problem)

In my book I use a definition of “agent” similar to the economists’; that is, a narrow AI that acts on behalf of its principal (or user) out of sight, while that principal’s attention is on other things. For instance, the Roomba sweeps up while you are at work. The self-driving car manages its route while you check messages or write articles. Spotify selects the next song for you while you’re listening to the current one, and you never even really see its selection process.

5 New Problems

But the Roomba and Spotify aren’t people, of course; they are narrow AIs that have been purchased from companies — or Retailers in this model — providing that product and service (respectively) to the market. These agents may seem to lack the self-interest that caused June to grab a detour beer with her cousin, but their behaviors are complicated and inscrutable to the user. Depending on how you look at it, this either introduces many new problems or exposes more opportunities to exploit.

1. Robot Keggers

In the far future, when AI looks less like a Roomba and more like C-3PO, we might in fact expect its self-interest to become a problem for both users and Retailers. You head out the door to work, and your house robots start throwing their equivalent of a keg party until it’s time to clean up before you get home. This robot Risky Business depends on a general AI that is, as yet, a long way off. The risks we have today are more nuanced, but they will balloon as AI gets more powerful and agentive technology becomes more common.

2. Roomba Spies

One of the most seemingly benign problems is that agents report data about their performance back to the Retailer. Last July, Maggie Astor of the New York Times published an article describing how the little vacuuming robots build a map of users’ homes to do their job better, but when they send those maps back to iRobot, iRobot winds up sitting on some unique insider data about its consumers. You might think a floor plan of your home is pretty innocuous, but knowing the difference between a small apartment with one couch and TV and a larger living room with lots of furniture and a huge media center says some things that would help advertisers get better at targeting their messages. It’s not just corporations that would be interested in that data. The thief casing a neighborhood would want to get their hands on it even more.

No armchair in your living room? You might see ads for armchairs next time you open Facebook. Did your Roomba detect signs of a baby? Advertisers might target you accordingly. — Astor, NYT
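
To make the concern concrete, here is a purely hypothetical sketch of the kind of record a mapping vacuum could report, and the trivial inferences an advertiser could draw from it. The field names and rules are invented for illustration; none of them come from iRobot.

```python
# Hypothetical telemetry from a mapping vacuum, and toy ad-targeting
# inferences drawn from it. Field names and rules are invented.

floor_map_report = {
    "home_area_sq_m": 120,
    "rooms": 5,
    "detected_objects": ["couch", "tv_stand", "crib", "media_center"],
}

def ad_segments(report: dict) -> list:
    """Turn a floor-plan report into advertising segments."""
    segments = []
    if "armchair" not in report["detected_objects"]:
        segments.append("armchair_shoppers")   # no armchair? show armchair ads
    if "crib" in report["detected_objects"]:
        segments.append("new_parents")         # signs of a baby
    if report["home_area_sq_m"] > 100:
        segments.append("large_home")
    return segments

print(ad_segments(floor_map_report))
# ['armchair_shoppers', 'new_parents', 'large_home']
```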

3. “Friendly” recommendations

With the Alexa example, Amazon is actually doing a bit of useful work for its user. If that user were interested in supplying specifics, they would have done so. But if the user has no preference for, say, what brand of shampoo to use, what’s the harm in letting Amazon recommend a partner’s product? It’s a small personal cost, but the “friendly” recommendation may prioritize products that run counter to your ethics, pocketbook, or interests.

There’s an additional problem when agents present as neutral. In 2008 David Braue of CNET showed in a series of tests that Apple’s shuffle algorithm favored iTunes purchases and certain recording labels. We can presume these reflect back-door deals between Apple and those labels. By labeling it “shuffle,” Apple brought to mind the shuffling of cards, but that metaphor is misleading, since only a cheater shuffles to control the results. The unmarked use of the term implies a neutrality that just isn’t there. The actual agency costs in this case are small (you just don’t hear beloved music as often as you would expect), but it’s easy to imagine the same behavior in a higher-stakes domain, or with even bigger assertions of neutrality. Imagine a stock portfolio manager that does the same, and you can see the danger.
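
To see how a “shuffle” can quietly stop being neutral, here is a minimal sketch contrasting a fair shuffle with a weighted one that, on average, surfaces a favored label’s tracks earlier. It illustrates the general technique only; it is not Apple’s actual algorithm, and the track and label names are made up.

```python
import random

library = [
    {"title": "Track A", "label": "Indie Co."},
    {"title": "Track B", "label": "Partner Records"},
    {"title": "Track C", "label": "Indie Co."},
    {"title": "Track D", "label": "Partner Records"},
]

def fair_shuffle(tracks):
    """Every ordering is equally likely -- what the card metaphor promises."""
    tracks = tracks[:]
    random.shuffle(tracks)
    return tracks

def weighted_shuffle(tracks, favored_label="Partner Records", weight=3.0):
    """Still looks random, but favored-label tracks tend to land earlier."""
    def key(track):
        w = weight if track["label"] == favored_label else 1.0
        return random.random() ** (1.0 / w)  # higher weight -> larger key on average
    return sorted(tracks, key=key, reverse=True)

# Over many runs, the favored label wins the first slot far more than half the time.
firsts = sum(weighted_shuffle(library)[0]["label"] == "Partner Records"
             for _ in range(10_000))
print(firsts / 10_000)  # roughly 0.75 with weight=3.0 and a 50/50 library
```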

4. Hackers and Flash Corruption

Keep in mind that the behaviors of agents aren’t permanent. Nearly all of them are networked in some way or another. Their programming can be updated remotely. Hackers can compromise insecure agents to do their bidding. How do you know with certainty what is in control of the self-driving car you’re about to get into? Is it being subsidized by deliberate detours or slowdowns near certain billboards or storefronts? Is it a kidnapper?

Even less-overtly-criminal retailers can send an update that performs a questionable act or gathers questionable data, and then quickly send a patch to erase the behavior. It’s more troublesome to catch a thief when its thief-ness amounts to seconds.

5. Black Hat Retailers

And of course there are even just straight-up liars and cheats. For instance, in the US, if you give your vote to a machine, you may naïvely trust it to deliver that same vote up the chain. But as the wretched Diebold scandal illustrated, the Retailer can simply trump the user’s wishes if they’re evil, have power, and don’t fear consequences. This is retail only in an indirect sense, but as consumers shop for less expensive options and wander into unfamiliar brands, shipped in from other countries where legal recourse would be difficult at best, how do they know what they’re bringing into their homes isn’t just as wretchedly biased?

4 New Solutions

I’m a designer and so very interested in thinking through not just the problems that new technology introduces, but also what solutions can be brought to bear. Some of these are more sci-fi than others, but all stem from the P-A-R problem.

1. Butler agents and GUID tags

You’ll recall that one of the ways the classic Principal-Agent dilemma was solved was through increased surveillance of the agent. While this is a straight-up mark of distrust for a human, agentive tech has no pride or feelings to worry about. So increasing the surveillance of agents is one possibility. But one of the main benefits agentive technology provides to users is reducing the burden of mundane tasks. If we have to stay home all day to police our Roomba, it’s not conveying that benefit. The policing ought to be as agentive as the service, just not beholden to the same Retailer. This could be a specifically designed 3rd-party “butler” agent meant to keep tabs on the other agents through permitted sensor networks, as in the sketch below. Its reputation would hang on its loyalty.
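
Here is a minimal sketch of how such a butler agent might work, under the assumption that household agents expose an activity log the butler is permitted to read. The classes, fields, and allow-list are hypothetical, invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ActivityEvent:
    """One observed action by a household agent (hypothetical log format)."""
    agent_name: str
    action: str        # e.g. "vacuum", "upload_map", "unlock_door"
    destination: str   # where data, or the agent itself, went

@dataclass
class ButlerAgent:
    """Watches other agents' activity and flags anything the owner didn't permit."""
    allowed_actions: dict = field(default_factory=dict)  # agent_name -> set of actions

    def review(self, events: list) -> list:
        alerts = []
        for e in events:
            if e.action not in self.allowed_actions.get(e.agent_name, set()):
                alerts.append(f"{e.agent_name} did '{e.action}' -> {e.destination}")
        return alerts

butler = ButlerAgent(allowed_actions={"roomba": {"vacuum", "return_to_dock"}})
events = [
    ActivityEvent("roomba", "vacuum", "living room"),
    ActivityEvent("roomba", "upload_map", "adtech.example.com"),  # not on the allow-list
]
print(butler.review(events))
# ["roomba did 'upload_map' -> adtech.example.com"]
```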

You can imagine agents monitoring physical and virtual spaces for any unusual behavior and broadcasting alerts, but given how difficult it might be to separate human and AI behaviors, that runs too far in the direction of a police state. A solution for agentive robots may be to require GUID owner-identifiers on every single one. The FAA already requires owner identification and contact information for drones in the US. Perhaps this should extend to Roombas, self-driving cars, and any agentive robot as well. If you find a hacked gardening drone casing your backyard, you should have an easy way to find its owner and its Retailer to get questions answered, even at a distance.
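
As a sketch of what such a GUID owner-identifier scheme could look like, the registry, fields, and lookup below are invented for illustration (the FAA’s actual drone registry works differently).

```python
import uuid

# Hypothetical registry mapping an agent's GUID to whoever answers for it.
AGENT_REGISTRY = {}

def register_agent(owner_contact: str, retailer: str, model: str) -> str:
    """Mint a GUID for a new agentive robot and record its owner and Retailer."""
    agent_id = str(uuid.uuid4())
    AGENT_REGISTRY[agent_id] = {
        "owner_contact": owner_contact,
        "retailer": retailer,
        "model": model,
    }
    return agent_id

def identify(agent_id: str) -> dict:
    """Given the GUID printed on a found (or suspicious) robot, return its record."""
    return AGENT_REGISTRY.get(agent_id, {"error": "unregistered agent"})

# The hacked gardening drone casing your backyard now traces back to someone.
tag = register_agent("owner@example.com", "Acme Robotics", "GardenDrone 2")
print(identify(tag))
```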

2. White Hat Retailers and 4th estate auditors

An unlikely source of help is the White Hat Retailer, who makes its agents, policies, code, and networked communications open to inspection by the user. While regular Joes won’t be able to make heads or tails of the code, journalist-hackers can then do their fourth estate magic. That is, they can do the inspecting and exposing when things look fishy. The open-source and open-traffic notions run privacy risks for users, but the same journalist-hackers would be watching for those infractions. These Retailers would build a brand on transparency. But when Equifax can still be in business after the gross contempt it has displayed for consumers’ privacy (and even used the occasion to try to make money), it’s not clear to companies that White Hat strategies are needed. Hopefully legislators will lock that down soon, but we’ll have to see.

3. Market “Negotiations”

The second method to solve the classic P-A problem is aligning incentives. Design the relationship, the argument goes, to ensure that the Retailer has a solid incentive to serve the principal only as the principal would want. Unfortunately, in the retail space, consumers have little voice in setting those terms, so it will play out through indirect means. Users can…

  • Not buy it (not always a pragmatic option)
  • Switch to a competitor (when available)
  • Hack it (and nullify warranties)
  • Return products that are discovered to have a high agency cost

As you can see, none of them is as wholly satisfying (or powerful) as collective bargaining would be. As agentive tech is still gaining ground, I suspect social media shaming and user outrage that threatens retailer brands will perform the function of these indirect, bottom-up negotiations. The market cannot solve all problems, but for retailed AI, it may be the thing that AI retailers pay the most attention to.

4. AI Laws

The top-down part of the solution is to have strong consumer protection laws. Law always lags behind actual advances in technology, but it’s still important as leverage to keep AI retailers answerable for infractions against privacy and ethics. I would hope there will be a broad body of law that specifies general ethical behavior for agents and shores it up with individual case precedent. The EU leads the US by a long shot (one example: their General Data Protection Regulation comes into effect this year), and I hope we follow their lead.

I also hope there will be continued government support for and protection of auditors. Sadly, the scary trend of late is in the opposite direction, with last week’s party-line House vote to close the Election Assistance Commission, the only agency whose sole mission is to assure voting machine integrity. In light of Russian election interference, it signals some fundamental challenges to counting on this route to reduce agency costs.

***

So which is it?

I wouldn’t expect any one of these solutions to be the magic bullet that suddenly eliminates agency costs from retailed agentive technology. It’s much more likely to be some combination of them. However it happens, designers, product managers, and strategists need to be aware of the broad risks associated with agency costs and retailed AI so they can design to mitigate them. Consumers who care about what their agents are doing on their behalf have to build up a literacy. Hopefully some sci-fi authors can help illustrate the problem and encourage solutions before it brings trouble in the real world.

It would be lovely if we didn’t need these complications. But if you’ve studied your Bakan or even your Stross, you know that we need to keep a sharp eye on the ways that retailed behavior gets encoded into the agents around us.

***

This article was originally published on LinkedIn.

In addition to writing, speaking, and consulting about design and technology, Chris leads the Design Practice for the Global Team of the Travel and Transportation sector at IBM.


Chris Noessel

Chris is a 20+ year UX veteran, author, and public speaker. He delights in finding truffles in oubliettes. Tip me in coffee at ko-fi.com/chris_noessel.