May 27, 2015 / Jim Fenton

Not a happy WordPress Premium customer

Since I’m using my blog more actively to publish information about Nōtifs lately, I thought it would be a good time to upgrade my blog to WordPress Premium. For $99/year, I get a number of benefits including removal of ads from my blog. One of those benefits is the use of a custom domain name, and since I registered altmode.org several years ago, I thought it might be good to go from altmode.wordpress.com to altmode.org.

Everything went quite smoothly. Then a friend called. “Jim, since you’re a security guy, I’m surprised that I get a warning when using HTTPS to read your blog.” Sure enough, accessing https://altmode.org/ resulted in a warning, because the site presented a certificate that is valid for *.wordpress.com, not for altmode.org. I hadn’t seen this error because I generally browse my own blog through the administrative interface via altmode.wordpress.com. I should have anticipated this because I never got a request to provide or approve a certificate for altmode.org.

I contacted WordPress support about this. Their response:

WordPress.com currently doesn’t support SSL for custom domains, so you can avoid that error message by giving out the http:// version of your site’s address:

http://altmode.org/

Modern browsers will usually give a warning if you try to visit a site starting with https:// when SSL isn’t supported there, so the best way to avoid that is to make sure that links to your domain begin with http:// instead. We also have a page about HTTPS with more details and other options for turning off those browser warnings.

So the answer is, basically, don’t use TLS (SSL). The site supported it before my upgrade to Premium, but doesn’t now. Not something you want to tell a “security guy”.

There are a number of reasons this is a problem:

  • More and more people default to using TLS if they can.
  • Presenting a certificate for the wrong site just trains users to ignore these warnings, making them less secure.
  • A premium feature should be an upgrade.

How would I like it to work? WordPress should tell me that they’re going to obtain a certificate for my domain, and ask me to approve the request that their Certificate Authority will send me. Or they could even send me a Certificate Signing Request and let me buy the certificate myself.

What are some alternatives? One is to operate an HTTPS reverse proxy myself, mapping https://altmode.org/ onto https://altmode.wordpress.com/ (a rough sketch of the idea follows below). Another is to move my blog to a self-hosted WordPress site, but I’m not sure I want to deal with the frequent security issues I have been hearing about. A third is just to turn off the domain mapping and decide whether the cost of WordPress Premium is still worth it (mostly for ad removal).
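To make the reverse-proxy option concrete, here is a minimal sketch in Python of a proxy that terminates TLS for altmode.org and forwards requests to the wordpress.com-hosted site. The certificate and key files are placeholders for a certificate obtained separately, only GET requests are handled, and a real deployment would more likely use something like nginx; treat this as an illustration of the idea, not a recipe.

```python
# Illustration only, not production code: terminate TLS for altmode.org locally
# and forward requests to the wordpress.com-hosted blog. Certificate/key paths
# are hypothetical placeholders; error handling and non-GET methods are omitted.
import ssl
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

UPSTREAM = "https://altmode.wordpress.com"

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the same path from the upstream site and relay the response.
        with urllib.request.urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "text/html"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    # Binding port 443 requires root privileges.
    httpd = ThreadingHTTPServer(("", 443), ProxyHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("altmode.org.crt", "altmode.org.key")  # placeholder paths
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()
```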

I’ll decide which way to go soon, and apologize for the warning messages in the meanwhile. Please don’t just click “accept” when you see one of these, OK?

Image: “Broken Rusty Lock: Security (grunge)” by Flickr user Nick Carter used under CC BY 2.0 license.

May 11, 2015 / Jim Fenton

Nōtifs: Now Open Source!

I have been doing quite a bit of work and a few presentations on Nōtifs lately, and have gotten my prototype code to the point where I am comfortable publishing it on GitHub. There are three relevant repositories:

Notif-agent is written in Go, largely for performance reasons although I expect to be making use of Go’s “goroutine” concurrency capabilities in the near future. The code in the other repositories is written in Python; notif-mgmt uses the Django web framework.

Getting a Nōtifs agent running from this code for testing isn’t for beginners, but it’s not a huge task either. The biggest issue is that Nōtifs lend themselves to the use of a schemaless (non-relational) database, for which I use MongoDB. Unfortunately, non-relational databases aren’t supported by the official Django distribution, but there is a fork with MongoDB support.
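To give a flavor of why a schemaless store is a natural fit, here is a small illustrative sketch using pymongo against a local MongoDB. This is not code from the notif-* repositories, and the field names are hypothetical rather than the real Nōtifs schema; the point is simply that a notification is a document that can later be updated or deleted in place.

```python
# Illustrative sketch, not code from the notif-* repositories. Field names are
# hypothetical; the point is that a notification is a document that the same
# notifier can later update or delete in place.
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
notifications = client["notifs"]["notifications"]

# A notifier creates a notification with a best-effort expiration time.
result = notifications.insert_one({
    "notifier": "example-airline",
    "subject": "Flight 123 delayed 40 minutes",
    "priority": "routine",
    "expires": datetime.utcnow() + timedelta(hours=6),
})

# Later, the same notifier revises it rather than sending a new one...
notifications.update_one({"_id": result.inserted_id},
                         {"$set": {"subject": "Flight 123 now on time"}})

# ...or withdraws it entirely.
notifications.delete_one({"_id": result.inserted_id})
```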

Writing a prototype like this of course gives me a way to show people what Nōtifs are, which has resulted in a lot of valuable feedback. It also forces me to think more thoroughly about how the pieces fit together, and imposes a certain amount of discipline about resolving the details.

One of the main areas of feedback has been that it will be challenging to get users and notifiers to start using Nōtifs. As with many communication protocols (and identity management technologies, too), there is a “chicken and egg” effect: Potential users (notifyees) will only want to invest in getting started with Nōtifs when there is a reasonable number of notifiers, and potential notifiers will only be interested when there is a reasonable number of people they can reach through Nōtifs. My approach to this is to provide users with the ability to convert other methods of notification into nōtifs. I’ll be starting with Twitter and RSS feeds.
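As a rough sketch of what such a converter might look like, here is how RSS entries could be turned into notification-like records using the third-party feedparser library. The dictionary layout and the feed URL are placeholders, not the agent’s actual interface.

```python
# Rough sketch of the RSS-to-nōtif bootstrapping idea using feedparser (a
# third-party library). The dictionary layout is a placeholder, not the
# agent's real interface.
import feedparser

def feed_to_notifications(feed_url):
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        yield {
            "notifier": feed.feed.get("title", feed_url),
            "subject": entry.get("title", "(untitled)"),
            "link": entry.get("link"),
            # A stable ID lets the converter update an earlier nōtif
            # instead of sending a duplicate.
            "external_id": entry.get("id", entry.get("link")),
        }

for notif in feed_to_notifications("https://altmode.wordpress.com/feed/"):
    print(notif["subject"])
```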

March 30, 2015 / Jim Fenton

How to describe Nōtifs?

It has been several months since I have blogged or presented on Nōtifs, the notification concept I described last fall. But I have been working actively on it, and now have an early prototype running that I expect to talk about and demonstrate at the 20th Internet Identity Workshop, April 7-9 in Mountain View, California. It has been refreshing to be writing code again, and there’s no substitute for “running code” when it comes to working out the bugs in a concept like this.

A friend I talk with frequently pointed out that I need to think about how to describe this, since it doesn’t quite fit into any of the usual categories. Thinking about how I have described it to various people, I realize that I have different ways of talking about Nōtifs depending on the audience: how interested they are in the technical details, what their field of interest is, and how much time I have to do the describing. What follows is a collection of ways that I might describe Nōtifs, or aspects of Nōtifs that might be interesting (in no particular order):

To a privacy advocate:

Nōtifs allows you to receive notifications – everything from emergency alerts to advertising and newsletters – without giving the sender a persistent address like an email address or a phone number. It also doesn’t reveal anything new about you to the notifier, other than the name of your agent (which you likely share with many other users).

To someone concerned with spam and phishing:

Nōtifs gives you a way to receive notifications without spam and phishing messages. All notifiers must sign each notification and must be authorized by you, the recipient. Notifications you expect to receive via Nōtifs can then be treated with more suspicion if they arrive by email instead.

To someone concerned with inbox clutter:

Nōtifs help you manage your notifications by allowing notifiers to update and delete notifications, rather than send new ones. They can also set best-effort expiration times for notifications whose value has limited duration.

To someone working on the Internet of Things:

Nōtifs is instant messaging for the Internet of Things.

To someone who is involved with email:

Nōtifs is a one-way messaging medium that does not replace email, but it is a better medium for some of the things we currently use email for.

To an advertiser or newsletter publisher:

Nōtifs provides a way of reaching [potential] customers without the deliverability problems of email, particularly spam filtering. Intermediaries like email sending providers are not necessary except to send very large numbers of nōtifs quickly. You get positive confirmation when the recipient’s agent accepts each notification. If users opt out of an existing notification, the next notification you send will tell you so you can remove them from your notification list and, if appropriate, re-engage with them in some other way.

To an emergency agency:

Nōtifs provides a way to quickly send notifications to large numbers of people that have subscribed. Message priorities allow you to distinguish emergency alert notifications from advisories and community information messages.

To someone who is concerned about user control over their communications:

Nōtifs give users control over who and what notifies them. Users can also control how they get notified in response to a particular nōtif, and can opt out of any notification simply by telling their agent.


Got any more audiences that might be interested that I haven’t covered? Any better descriptions for these audiences? Please let me know in the comments.



January 14, 2015 / Jim Fenton

Adventures with IPv6 Path MTU Discovery


As I mentioned in a recent blog post, after signing up for a Social Security online account, I wasn’t able to access the login page at their website from my home. After some investigation, I discovered that the problem was caused by my “tunneled” IPv6 connection, which only accepts packets up to a certain size (the maximum transmission unit, or MTU). The website wasn’t discovering that MTU correctly, and as a result the large packets it sent just weren’t making it to me.

Path MTU Discovery

For those of you unfamiliar with IPv6, the next generation of Internet protocols, here’s some background on how this should work. Most systems send packets of up to about 1500 bytes, the limit imposed by Ethernet. But some network paths can’t handle packets that big. Connections that are tunneled, which means that each packet is enclosed in another packet, can’t handle packets as large because the enclosing packet’s headers use up part of the space; an IPv6-in-IPv4 tunnel, for example, spends 20 bytes on the outer IPv4 header, leaving room for only about 1480 bytes inside. The IPv6 connection to my home is tunneled, enclosed in IPv4 (the predominant, “old” Internet Protocol) packets, because my Internet Service Provider doesn’t yet provide IPv6 service in my area and I wanted to experiment with it.

Two ways that oversize packets can be handled are fragmentation and Path MTU Discovery. IPv4 uses fragmentation, which calls for routers to split incoming packets into two or more pieces and send them onward separately. That’s a fair amount of overhead for the routers, doesn’t address the root cause of the problem, and is otherwise considered harmful. IPv6 instead uses Path MTU Discovery: the router that runs into the MTU limitation sends the originator a message, called an ICMPv6 Packet Too Big message, asking it to resend at an acceptable MTU. The sender is expected to adjust its packet size and send this and subsequent packets at a forwardable size.

My Problem

Unfortunately, the symptoms of an MTU blockage aren’t at all obvious. In my case, the website download just stalled. I did a little work with a debugging tool called Firebug, which showed that the link I clicked had redirected to another site, and then nothing. Another tool, Wireshark, which shows the individual packets, showed that I had opened a connection to the website, sent a “TLS Client Hello” packet (the first step in making a secure https: connection), and then got nothing back except keep-alive packets. The response to a Client Hello is a Server Hello, which is typically quite large because it contains such things as the server’s certificate. I retried from a nearby coffee shop that happens to have IPv6 for its customers, and it worked from there. It also worked if I manually tweaked my network interface to use smaller packets (the server takes the hint and does likewise) or if I just turned off IPv6. Apparently the IPv6 Path MTU Discovery mechanism wasn’t working.

I began with the standard Social Security support resources, which of course aren’t designed for this sort of problem at all. I then consulted with a former colleague of mine from Cisco, Dan Wing, who suggested I try a mailing list dealing with US Federal IPv6 deployment. Unfortunately that didn’t reach any of the right people at SSA. Dan then tried a private mailing list for IPv6 providers, and got a lot of attention at very high levels. The person responsible for IPv6 deployment at Social Security reached out to me, and convened a conference call, including his firewall and load balancer engineers, during which I reproduced the problem.  They reported that they had not received any Packet Too Big (PTB) packets.

I opened a case with Hurricane Electric, my IPv6 tunnel provider (a free service, I might add). They reproduced the problem and sent traces indicating that they were sending PTBs in this specific case. The question became where the packets were being dropped.
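For anyone debugging a similar problem, one way to see whether PTB messages are reaching your host at all is to watch for them with a packet sniffer; this is roughly what I was doing interactively with Wireshark. Here is a small diagnostic sketch using the third-party scapy library (it needs root privileges to capture packets).

```python
# Diagnostic sketch: watch for ICMPv6 Packet Too Big messages arriving at this
# host. Uses scapy (third-party) and needs root; run it while reproducing the
# stalled connection.
from scapy.all import sniff
from scapy.layers.inet6 import IPv6, ICMPv6PacketTooBig

def report(pkt):
    if ICMPv6PacketTooBig in pkt:
        print("PTB from %s, advertised MTU %d"
              % (pkt[IPv6].src, pkt[ICMPv6PacketTooBig].mtu))

# Capture only ICMPv6 traffic; print each Packet Too Big as it arrives.
sniff(filter="icmp6", prn=report, store=False)
```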

I traced the route to Social Security’s website and wanted to confirm that the last hop on the traceroute was, in fact, theirs. Their firewall engineer then noticed that one network was blocked due to a DDoS attack that had occurred last summer, and it turned out to be the network from which the Hurricane Electric PTB packets were originating. The block was removed (temporarily, for now) and the problem cleared. Social Security is currently determining whether it can stay removed, or what should be put in its place that won’t cause collateral problems.

Lessons

I learned a great deal about IPv6 from this experience, which was much of my motivation for deploying IPv6 in the first place. But that’s not everybody’s motivation, and we need IPv6 to work reliably for people who aren’t network engineers with enough time, persistence, and helpful friends to solve these problems. Here are a few take-aways from this particular experience:

  1. Path MTU Discovery failures cause problems that aren’t always obvious or easy to spot.
  2. Especially at this stage of IPv6 deployment, when many people still have to use tunnels to get IPv6 service, having Path MTU Discovery work correctly is very important and should be included in testing (if it isn’t already).
  3. When designing firewall rules and access lists, bear in mind that PTB packets may come from intermediate routers in the network, typically not from the user’s endpoint.
  4. Permit PTB packets wherever possible, including in access lists used to mitigate DoS attacks (when this is consistent with the type of DoS attack, of course).
  5. Few Path MTU Discovery testing resources seem to exist on the Web; it would be nice to have more (a crude do-it-yourself probe is sketched below). As it happens, such a resource probably wouldn’t have helped here, since the problem was specific to particular IPv6 networks, but it would have helped eliminate some possibilities more easily.
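In that spirit, here is the crude probe mentioned in item 5: binary-search for the largest ping that gets through with fragmentation prohibited. It assumes Linux iputils ping and its -6, -M do, and -s options (older systems use the separate ping6 command); treat it as a rough sketch rather than a proper test tool.

```python
# Crude path-MTU probe: binary-search the largest ICMPv6 echo that succeeds
# with fragmentation prohibited. Assumes Linux iputils ping (-6, -M do, -s);
# older systems use the ping6 command instead.
import subprocess

def ping_ok(host, payload_bytes):
    result = subprocess.run(
        ["ping", "-6", "-M", "do", "-c", "1", "-W", "2",
         "-s", str(payload_bytes), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def probe_path_mtu(host, lo=1280, hi=1500):
    # The IPv6 header (40 bytes) plus ICMPv6 echo header (8 bytes) are added
    # on top of the payload given to ping, so packet size = payload + 48.
    # 1280 is the IPv6 minimum MTU, so it serves as the lower bound.
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if ping_ok(host, mid - 48):
            lo = mid
        else:
            hi = mid - 1
    return lo

print("Approximate path MTU:", probe_path_mtu("example.com"))  # substitute the host under test
```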

Acknowledgements

I’d like to especially thank the many folks from the Social Security Administration Division of Network Engineering/Office of Telecommunications and Network Operations, whose responsiveness and commitment to getting this fixed were superb. Thanks also to Hurricane Electric, both for their free IPv6 tunnel service and for their willingness to provide support despite the fact that it is free, and to the folks on the IPv6 provider mailing list who jumped in to help me find the right people to contact. Special thanks to Dan Wing for using his contacts and for providing encouragement and advice along the way.

January 14, 2015 / Jim Fenton

How much entropy is in a name?

I have been thinking about authentication, and particularly knowledge-based authentication (KBA), lately. There are many variations of and uses for KBA; one of the most common forms is the challenge or “security questions” that we are often asked to use as backup authentication. Sometimes these questions are chosen by the user, and sometimes by the service doing the authentication.

A classic challenge question dating from well before the use of online services is, “What is your mother’s maiden name?” Since one measure of authentication strength we often use is entropy (roughly, how hard it is to randomly guess the correct value), I thought it might be interesting, at least as an academic exercise, to figure out the entropy associated with last names in the United States. Read more…
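For the curious, here is a back-of-the-envelope sketch of the kind of calculation involved: the Shannon entropy of the surname distribution, computed from a CSV of surname counts such as the Census Bureau publishes. The column name is hypothetical; adjust it to whatever file you actually use.

```python
# Back-of-the-envelope sketch: Shannon entropy (in bits) of a surname
# distribution, given a CSV with a "count" column (column name hypothetical;
# adapt to the surname frequency file you actually have).
import csv
import math

def surname_entropy(csv_path):
    with open(csv_path, newline="") as f:
        counts = [int(row["count"]) for row in csv.DictReader(f)]
    total = sum(counts)
    # H = -sum(p * log2 p) over the observed surname frequencies
    return -sum((c / total) * math.log2(c / total) for c in counts)

print("%.2f bits" % surname_entropy("surnames.csv"))
```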

January 8, 2015 / Jim Fenton

Level of Assurance alternatives: A modest proposal


There has recently been a good deal of attention to the concept of Level of Assurance (LOA), and to the unsuitability of a one-dimensional, four-level classification of authentication quality for many current requirements, including NSTIC and the many non-government uses that have adopted the LOA model. On top of that, President Obama issued an Executive Order last October that further motivates change by requiring two-factor authentication for all US Government transactions with consumers that release their personal information.

I have been thinking about ways to improve (or replace) Levels of Assurance for a few years now. Among the comments I made about NIST Special Publication 800-63 during its 2011 revision to SP 800-63-1 was that LOA didn’t support the NSTIC model of separate identity (authentication) and attribute providers very well, since LOA encompasses aspects of both. Fixing that would have required major surgery to SP 800-63, and would have made its responsiveness to OMB M-04-04, the requirements document from the White House, less clear. So a paragraph was inserted in the introduction:

Current government systems do not separate functions related to identity proofing in registration from credential issuance. In some applications, credentials (used in authentication) and attribute information (established through identity proofing) could be provided by different parties. While a simpler model is used in this document, it does not preclude agencies from separating these functions.

Briefly, the four LOA levels are:

  • LOA 1: Some assurance that this is the same Claimant who participated in previous transactions
  • LOA 2: Single factor network authentication
  • LOA 3: Multi-factor remote network authentication
  • LOA 4: Strong multi-factor cryptographic authentication

Here are some of the problems that I see with the current LOA structure:

  • Because LOA combines authentication and identity proofing, strong pseudonymous authentication is not recognized. Pseudonymous transactions are limited to LOA 1 or 2.
  • LOA 2 is sometimes pseudonymous and sometimes not (requiring identity proofing). It should only mean one thing.
  • LOA 2 and 3 identity proofing requirements are different, but not substantially so.

To avoid confusion, we should avoid the use of the term “Assurance” in the new structure.

Proposal

I propose that we have two dimensions to the new structure, which I will call authentication strength and attribute reliability. Authentication strength refers to the technical strength of the authentication process itself, for example, whether it is single or multi-factor. Attribute reliability refers to the quality of the identity proofing process, as well as the strength of the binding of the identifying attributes to the authentication.

I am proposing that we have three levels each for strength and reliability, as follows:

Authentication Strength

  • Level 1: Some confidence in the authentication (typically username/password)
  • Level 2: High confidence (Two or more factors used together)
  • Level 3: Very high confidence (Two or more factors including a hardware token)

Note that account reset and recovery must be done at a level of strength commensurate with the original authentication.

Attribute Reliability

  • Level 1: Self-asserted
  • Level 2: Reliable attribute (Remotely identity proofed)
  • Level 3: Very reliable (In-person identity proofed or derived from identity-proofed assertions)

Detailed requirements for the strength and reliability levels will of course need to be worked out.

Why not more levels? One has to ask whether additional levels would be actionable on the part of relying parties. As it is, there are theoretically nine strength x reliability combinations, although not all of them may be useful. High strength with low reliability might be used in cases where strong pseudonymous authentication is required, but I struggle to find a use for high reliability with low strength.

These strength/reliability levels map into the existing assurance levels as follows:

Level of Assurance     Strength   Reliability
1                      1          1
2/Pseudonymous         1          1
2/Non-pseudonymous     1          2
3                      2          2
4                      3          3
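To make the mapping concrete, here is a small illustrative sketch (mine, not anything from SP 800-63 or M-04-04) that models an authentication event as a (strength, reliability) pair and reports the highest legacy LOA it satisfies, following the table above.

```python
# Illustrative only: model the proposed two-dimensional structure and map a
# (strength, reliability) pair onto the highest legacy LOA it satisfies,
# following the table above. Nothing here comes from SP 800-63 itself.
from typing import NamedTuple

class AuthVector(NamedTuple):
    strength: int     # 1 = some confidence, 2 = high, 3 = very high
    reliability: int  # 1 = self-asserted, 2 = remote proofing, 3 = in-person

def legacy_loa(v):
    if v.strength >= 3 and v.reliability >= 3:
        return 4
    if v.strength >= 2 and v.reliability >= 2:
        return 3
    if v.reliability >= 2:
        return 2  # non-pseudonymous LOA 2
    # (1, 1) corresponds to both LOA 1 and pseudonymous LOA 2, the very
    # ambiguity the proposal tries to remove; high strength with low
    # reliability (strong pseudonymous authentication) also lands here.
    return 1

print(legacy_loa(AuthVector(strength=2, reliability=2)))  # -> 3
```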


What about other dimensions?

Various other dimensions have been proposed in discussions about LOA alternatives. For example, in a message to the IETF Vectors of Trust mailing list, Justin Richer offered a strawman that included assertion presentation and operational management. Assertion presentation (the leakiness of the federation protocol) is more closely tied to the choice of network protocols, and is therefore more directly visible to the relying party, as compared with strength and reliability, which need to be explicitly asserted. Operational management is a characteristic that would affect the accreditation (through a trust framework or otherwise) of identity and attribute providers, and is therefore generally outside the scope of a given transaction. Assertion presentation and operational management might, however, place upper bounds on the levels of strength and reliability that a relying party is able to accept from a given provider.

Additional dimensions have a major impact on the complexity of the framework, probably multiplying the number of possible combinations by a factor of 3 or 4 per dimension.

Acknowledgements

Much of the inspiration and many ideas for this framework come from discussion on the IETF Vectors of Trust mailing list.

January 1, 2015 / Jim Fenton

Resolutions 2015


I’m struggling with posting a New Year’s Resolutions blog post today, mostly because it’s probably not especially interesting to most of you who are reading. But I hope that in doing so, I will feel more accountable for my resolution. Not to mention the fact that, by publishing this blog post, I’m getting my resolution off to a good start.  But I won’t be offended if you stop reading here.

Happy New Year!

Read more…
