Responsible Disclosure

Forgive me father, for I have sinned.

Unless you’ve been living under an internet-shaped rock for the last few weeks, you will have seen a handful of security issues disclosed online in New Zealand: Wheedle, ListSellTrade, Geta, and more recently MSD. I joined the gleeful pile-on around the auction sites in particular, which was amplified by the apparent stupidity of trying to compete with Trade Me using a half-assed website.

In retrospect, I should have been more circumspect (put your hands in the air and say yeah).

In case you didn’t know, there are protocols around responsible security disclosure. The OIS has a weighty tome on the matter, but here’s a simplified overview (with a rough timeline sketch below the list):

  1. Discover a flaw. Do not exploit it.
  2. Notify the owner of the flaw in private, giving them enough detail to find and resolve the flaw.
  3. Give the owner of the flaw enough time to reasonably notify their users and/or resolve the flaw.
  4. After waiting for the time above, disclose the flaw so that users can make themselves safe, and so that others can learn from it.
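
If it helps to see those steps as a timeline, here’s a minimal sketch in Python. The 90-day grace period and the function name are mine, purely for illustration, not something lifted from the OIS document; “enough time” is whatever the discloser and the site owner agree is reasonable.

```python
from datetime import date, timedelta

# Illustrative only: the protocol says "enough time", not a specific number of days.
GRACE_PERIOD = timedelta(days=90)

def disclosure_timeline(discovered: date, notified: date) -> dict:
    """Map the four steps above onto concrete dates (step 3 is the wait itself)."""
    earliest_public = notified + GRACE_PERIOD
    return {
        "1. flaw discovered (not exploited)": discovered,
        "2. owner notified in private": notified,
        "3. grace period ends": earliest_public,
        "4. earliest public disclosure": earliest_public,
    }

if __name__ == "__main__":
    for step, when in disclosure_timeline(date(2012, 10, 1), date(2012, 10, 2)).items():
        print(f"{step}: {when.isoformat()}")
```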

It’s pretty clear that in the case of the MSD flaw, both Keith Ng and Ira Bailey acted responsibly by notifying the MSD (step 2) and not going public until MSD had undertaken to close the kiosks (step 4). In fact, listening to Ira discuss the disclosure on Radio NZ (mp3 link), I’d like to apologise and withdraw my accusations of douchebaggery (thanks @rmi). But I still have questions about bug bounties; read on.

In the cases of Wheedle et al, exploits were being thrown around on Twitter with abandon (by myself and others), and this was wrong.

In our defense, the sites were all brand new and fundamentally flawed, so the ferocious takedown was low-risk. But it was still wrong. I was aware of others notifying the site owners properly (@dylanreeve is a stand-up guy, for example, trying harder than I would have to get hold of people behind the scenes), so I didn’t bother to do so myself.

About those Bug Bounties

In some cases, companies provide a “bug bounty” for users who discover security flaws. This is for a couple of reasons: firstly, because there is value in having these flaws discovered and resolved before they are made public; and secondly, because it acts as an incentive for “black hat” hackers to move from step one to step two above. Hackers can opt for a quick, legitimate pay-off, rather than exploiting the flaw for a possibly dubious gain.

In my opinion, it’s totally kosher to ask a private company for a bug bounty. It’s in their interest to close the hole, and most responsible companies should have a public bounty policy, because even the best operational security is not going to keep up with every single exploit.

But a government department? I’m not sure about this one. On the one hand I think it’s our social responsibility to help these guys out as much as we can. Maybe I’m a wet pinko liberal socialist, but we’re all in this shitfight called the Internet together, and I think it’s a bit much to ask for a bug bounty on an issue that affects the most vulnerable in our society.

But then I read about $50k for a 2-week Deloitte review and think that maybe a $2k reward per bug would go a long way to making that review irrelevant.
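
For what it’s worth, the back-of-the-envelope version (using the figures quoted above, which are the numbers I read rather than anything official):

```python
# Rough comparison only: the figures are the ones quoted above, not official numbers.
review_cost = 50_000    # reported cost of the 2-week Deloitte review (NZD)
bounty_per_bug = 2_000  # hypothetical reward per confirmed bug (NZD)

print(review_cost // bounty_per_bug, "bounties for the price of one review")  # 25
```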

I dunno. What do you think?

6 Replies to “Responsible Disclosure”

  1. “But then I read about $50k for a 2-week Deloitte review and think that maybe a $2k reward per bug would go a long way to making that review irrelevant.”

    The reward system cannot replace a formal review because non-public systems must be assessed for security and “white hats” do not (normally) have access to those systems.

    I’d argue that a reward system should complement a formal review/audit process.

    “But [rewards from] a government department? I’m not sure about this one.”

    The incentives for Government departments are similar to those for Google. Government-owned sites and software are accessible from outside the country and carry the same (maybe increased?) risk of malicious activity as corporate sites.

    So why not offer a reward programme? On a purely economic basis wouldn’t it make sense?

  2. I wrote a post about this when Keith first outed Ira’s name. Among other things I suggested that the government *should* implement a bug bounty.

    We already have the DIA and NCSC, which have some remit to oversee government IT systems. I think they should establish a public-facing reporting mechanism and the ability to liaise with all government IT departments and contractors on these issues.

    http://dylanreeve.posterous.com/dealing-with-wtfmsd

  3. So how does this apply when, like Wheedle, they have clearly taken no care at all with security? When they quietly close that specific hole and move on, no doubt with many many more lurking problems waiting to happen?

  4. Paying a bounty for actually finding an exploit is a bit like paying a recruitment fee. You only pay if you hire someone. Govt depts have no problem paying recruitment fees because there is no cost unless they get what they want. It seems to me that paying a bounty makes similar sense because you get exactly what you want – discovering a dangerous problem quickly. While being socially responsible is great, in some ways this is a business issue. If Deloittes had gone to MSD and said they’d found a flaw I think we can be sure they would’ve been immediately hired to point out what it was.

    1. Formal auditors like Deloittes simply can’t take that approach and prospect for work by poking at systems uninvited. Nor would you want them to.

      In fact, it appears that MSD had already received a report from experts retained to examine at least some of their systems, detailing a number of security issues, and that those warnings were ignored or not followed up sufficiently.

      http://computerworld.co.nz/news.nsf/news/earthquake-data-accessible-through-winz-kiosks

  5. I think the biggest argument in favour of bug bounties is that finding and documenting security exploits in public-facing websites is, quite simply, work, and it’s work that people are usually paid quite well to do, because some amount of training and experience is involved in doing it correctly. In cases like Wheedle, people were getting payback in the currency of attention and fun, but it still wasn’t being done for no reason.

    It may be noble and decent for someone to direct attention to a security hole free of charge, but it also isn’t unreasonable to expect to be paid for it. This can also be a form of blackmail, but that’s usually pretty obvious and there are strategies for dealing with it. And, frankly, if an organization is doing security right in the first place then the vast majority of exploits should be discovered by their own people – in which case, no need for a bounty.
