Service Management and identity

Identity & Access Management (IAM) systems need to be reliable, perform well and have adequate end-user support.  When assessing the service management needs of an IAM environment, a number of factors must be considered.

Wikipedia defines IT Service Management as ‘a discipline for managing information technology (IT) systems, philosophically centered on the customer’s perspective of IT’s contribution to the business.’  Service Management for an IAM system needs to consider both business-area and end-user needs – and these may differ depending on perceptions, actual usage patterns and the functions of the IAM system.

To this last point, IAM offers a wide range of functionality: authentication (login), authorization, account creation, provisioning, administration, reporting, etc.  The service management profile for these functions can vary; for example, login and authorization services need to be highly available and well supported, while functions like reporting are less critical.  Assessing service management for IAM, therefore, needs to look at each functional area of the system.
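To make this concrete, the per-function profiles could be captured in a simple table.  The sketch below is illustrative only: the function names, availability targets and support tiers are assumed values, not recommendations for any particular system.

```python
# Illustrative service management profiles per IAM function.
# All targets and support tiers here are assumptions for the example.
IAM_SERVICE_PROFILES = {
    "authentication": {"availability_target": 0.9999, "support": "24/7"},
    "authorization":  {"availability_target": 0.9999, "support": "24/7"},
    "provisioning":   {"availability_target": 0.999,  "support": "business hours"},
    "reporting":      {"availability_target": 0.99,   "support": "best effort"},
}

def profile_for(function: str) -> dict:
    """Look up the service management profile for a given IAM function."""
    return IAM_SERVICE_PROFILES[function]

print(profile_for("reporting")["support"])  # prints: best effort
```

Assessing each function against its own profile, rather than one blanket target, is the point of the exercise.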

Data centre service management has swung wildly in the past 30 years, from centrally controlled and highly available mainframe environments to more lax client-server setups of the late 80s and early 90s.  Today’s expectation for the quality of enterprise data centre services has returned to a more strict standard.  Business sponsors and users expect the equipment, network and services to be highly available, with scheduled outages and evergreen plans (for future expansion).

Help desk services need to be assessed to ensure IAM services are properly supported.  Help desk support can range from the basic email, ‘best-effort’ model to full 24/7 phone and remote take-over support.  Understanding end-user requirements is critical to striking a balance between help desk costs and a quality support model.

I’ve had a number of clients identify 24/7 support for their infrastructure and help desks – and then balk when the cost of such a service is realized.  The justification for the blanket support is that if the application is promoted as being online, it needs to always be available.  Many clients want to respond to help requests when they occur.  However, in my public sector experience anyway, these systems (and the IAM providing protection) are often used for non-critical purposes.  Registering for a program, accessing document stores, even retrieving information needed for business purposes – these types of transactions are rarely critical in nature and tend not to deserve ’round the clock support.

In the private sector, the decision to provide this type of support is strictly a financial one.  The cost of supporting users versus the revenue gained (and the long-term benefit to the brand) can be calculated to support extended hours for support.

IAM systems that support medical applications are perhaps the one type of system that will always require extended support hours, highly available infrastructure and responsive end-to-end architectures.  This is particularly true for systems that support health workers (physicians and nurses) and their access to patient and reference information.  Failing to implement an appropriately high level of service management for the IAM systems used in healthcare can be disastrous.

It will be interesting to see what the new breed of patient-oriented portals choose to provide in the way of redundancy, performance and support services.  These emerging systems are geared to providing patients access to their own health information – data that they can use for education, self-diagnosis or treatment – but it isn’t clear that the portals will need to be highly available.  If they do, the sponsors will need to dig deep to fund their operations.

Service management is key to a sustainable identity management solution, and a proper assessment of technology, people and processes is an important part of any IAM review.


Related: Kuppinger Cole have an article on ITIL vs IT Service Management that is worth a read.

Assurance and identity

Because I spend most of my days implementing IAM systems, Identity Assurance is a bit of a pet topic of mine – it seems that IAM design frequently comes back to the type of information being accessed and the quality of the end-user’s identity.  In enterprise systems that provide access to sensitive information, a review of Identity Assurance is critical to ensure appropriate controls are in place to protect that information.

Identity Assurance is, according to Wikipedia, ‘the ability for a party to determine, with some level of certainty, that an electronic credential representing an entity … can be trusted to actually belong to the entity.’  Identity Assurance is commonly expressed in ‘levels of assurance’, ranging from 1 (low assurance) to 4 (very high assurance).
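The rule of thumb that falls out of this definition is simple: a credential’s level of assurance must meet or exceed the level required by the information it unlocks.  A minimal sketch, with made-up classification labels:

```python
# Map information sensitivity to a required level of assurance (LoA 1-4).
# The classification labels are invented for this example.
SENSITIVITY_TO_LOA = {
    "public": 1,
    "internal": 2,
    "confidential": 3,
    "restricted": 4,
}

def required_loa(classification: str) -> int:
    """Return the minimum LoA needed to access this class of information."""
    return SENSITIVITY_TO_LOA[classification]

def credential_sufficient(credential_loa: int, classification: str) -> bool:
    """A credential may only unlock information at or below its own LoA."""
    return credential_loa >= required_loa(classification)

print(credential_sufficient(1, "confidential"))  # prints: False
```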

When doing IAM assessments, I have found that many client systems have been built without levels of assurance in mind.  Systems with sensitive information are accessed with the same electronic credentials created for a system with basic, publicly-classified information.  In other words, an account is created for a simple site and reused for access to a site with more confidential information.

This poses a number of problems…

  • The credential itself is not of sufficient strength to access the confidential site.  For example, the password rules used may be sufficient for the simple site but are not strong enough for the confidential site.  This could make the confidential site prone to vulnerabilities (e.g. dictionary attacks on weak passwords) that would have significant consequences.
  • The credential has been issued to a user without adequate identity proofing.  There are many examples of low-level credentials from social media sites.  An OpenID based on a Google account is not verified as belonging to a real-world user – something that may well be fine for access to Google apps.  But accepting that same self-issued credential for access to more confidential information is likely not appropriate without increasing the identity assurance.
  • The user may no longer be in sole possession of the credential – either they have stopped using it for an extended period (and it has been unknowingly hacked), or they are willingly sharing it with a co-worker, spouse, etc.  Sharing a credential is actually fairly common within households, especially for access to family blogs, Flickr and other social media sites.  Using such a credential for a sensitive application poses a number of risks.
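The first problem, credential strength, is easy to illustrate: a password that satisfies the simple site’s policy can fail the confidential site’s policy outright.  The two policies below are invented for the example.

```python
import re

# Hypothetical password policies for the two sites in the scenario above.
SIMPLE_POLICY = {"min_length": 6, "require_symbol": False}
CONFIDENTIAL_POLICY = {"min_length": 12, "require_symbol": True}

def meets_policy(password: str, policy: dict) -> bool:
    """Check a password against a (simplified) policy."""
    if len(password) < policy["min_length"]:
        return False
    if policy["require_symbol"] and not re.search(r"[^A-Za-z0-9]", password):
        return False
    return True

pw = "sunshine"  # acceptable on the simple site, then reused on the other
print(meets_policy(pw, SIMPLE_POLICY))        # prints: True
print(meets_policy(pw, CONFIDENTIAL_POLICY))  # prints: False
```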

Fortunately there are some excellent standards and frameworks for determining appropriate levels of assurance.  These tend to be based on a business-driven information classification exercise, i.e. the level of assurance required is directly related to the sensitivity of the information and how it is used.  Once that classification has been performed, the assessment can be done to ensure:

  • appropriate identity proofing is performed;
  • the credential is issued in a secure manner;
  • the credential’s lifecycle is properly managed (e.g. dormant accounts are revoked);
  • the credential has been properly authorized to be used by the application or site; and
  • the technical environment in which the credential is used is appropriately managed and secured for the type of information being accessed.
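One of these checks, revoking dormant accounts, lends itself to a short sketch.  The 90-day threshold below is an assumption; the right value depends on the sensitivity of the information being protected.

```python
from datetime import datetime, timedelta

# Assumed dormancy threshold; tune to the information classification.
DORMANCY_THRESHOLD = timedelta(days=90)

def is_dormant(last_login: datetime, now: datetime) -> bool:
    """True if the account has gone unused longer than the threshold."""
    return now - last_login > DORMANCY_THRESHOLD

now = datetime(2010, 6, 1)
print(is_dormant(datetime(2010, 1, 15), now))  # prints: True
print(is_dormant(datetime(2010, 5, 20), now))  # prints: False
```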

By understanding the information being accessed and applying a standardized process to assessing Identity Assurance, the strengths and weaknesses of the IAM system can be readily determined.


Code Technology now on Facebook…

I have blogged and tweeted many times about Facebook’s deceptive privacy policy and ‘promiscuous’ identity information sharing practices.  The company has a track record of misleading its users about what personal information will be shared with advertisers and other users.  And personal profile details are being shared ever more widely (source: Matt McKeon), with an increased emphasis on pushing personal information out to the Internet.

I don’t like Facebook because Facebook have not shown themselves to be trustworthy, so I have never set up a Facebook account.  I’m not among their 500+ million users.

But I recently figured out that you can have a company presence without the need for a personal account.  So today I decided to set up a company page. My reason for doing so is simple: Facebook as a platform is rather important.  Without a page on Facebook, Code Technology is less visible to many hundreds of millions of people.  In fact, without a page, my business is invisible to those people who think Facebook IS the Internet.

On Code Technology’s Facebook page there will be links to this blog, an occasional update, perhaps some identity management-related pictures and possibly a video post.  What I won’t share is any personal info — the company hasn’t earned my trust (and likely never will).

Please visit the Code Technology Facebook Page and let me know what you think.  A ‘like’ won’t hurt…  🙂

thx, Mike

Availability: Is it over-rated?

The general impression is that identity management systems — in particular the authentication and authorization components — need to run continuously in order to ensure users can access their business applications.  Enterprise IT shops are accustomed to building in hardware redundancy and rigorous processes to ensure systems ‘stay up’.  It is not uncommon for a critical business system to have a target up-time of 99.99% or even 99.999%.
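Those targets sound abstract until they are converted into a downtime budget.  A quick back-of-the-envelope calculation (ignoring leap years):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability: float) -> float:
    """Allowed downtime per year for a given availability target."""
    return (1 - availability) * MINUTES_PER_YEAR

# 99.99% allows roughly 52.6 minutes of downtime per year;
# 99.999% allows roughly 5.3 minutes.
for target in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{target:.3%}: {downtime_minutes_per_year(target):,.1f} min/yr")
```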

Why is this the case?  Information systems availability (or lack thereof) can impact productivity and, for private businesses, profitability.  And in the case of medical systems — actual clinical systems, not informational websites — an outage can impact health service delivery and have negative outcomes for patients.  As a result, it is common for an identity management system to be designed for high availability, and for organizations to fund (hardware, software, people, etc.) the service at a level appropriate to meet this goal.

But not all systems need this type of high availability.

Take, for example, a set of web applications offered to the public for access to government information.  These applications represent a sub-set of the business being conducted, i.e. they could be used to apply for funding or to access a library of online information products.  In my experience, this type of public-sector system is by far the most common type of application used by citizens online.

So here’s the bombshell — these types of systems do not need to be highly available…  24/7/365 access, highly redundant services, on-call technical analysts, etc. are not part of the requirements for these web applications.  Why? Because the expectations and needs of users for access are not as high as enterprise architects and overly concerned business folks would lead us to believe.

Think about it for a moment: if a government website or application were down, what would most of us do?  Call our elected representative in a rage?  Close our business? Drive to the nearest service centre?

No. We’d do what we do when other websites are unavailable — surf on to the next one, spend some quality time on Facebook or check our email…


Blogroll update

It has been a while since I strolled through my own Blogroll… there is always good content in there worth sharing.

  • Mark Dixon is back blogging — here’s a great post on how to make a bad fake ID…
  • Patrick Harding has an interesting write-up on ADFS vs Ping terminology.  Interesting (to me) given that I’ve been working on an ADFS v2.0 project lately…
  • Kim Cameron also returns to blogging with a new post — a video interview that delves into Identity Federation and the cloud.
  • Jeff Bohren has some criticism of Apple’s handling of the iPhone 4 reveal by Gizmodo.  Seems the ‘iPolice’ are confiscating first and asking questions later…
  • David Fraser, the Canadian privacy lawyer, offers up a balanced view of StreetView and privacy non-issues.



Facebook’s latest privacy troubles

After years of controversy, Facebook may well end up in a Canadian Federal court this fall.

In August last year, Canada’s privacy watchdog, Jennifer Stoddart, announced that Facebook had agreed to improve its privacy protocols to be compliant with the Personal Information Protection and Electronic Documents Act (PIPEDA).

But instead of working to address the concerns, last December Facebook implemented changes that further reduced user privacy.  These changes effectively required users to manually modify settings to prevent friends, personal information and photos from being shared.  According to the Wikipedia entry Criticism of Facebook:

… a user whose “Family and Relationships” information was set to be viewable by “Friends Only” would default to being viewable by “Everyone” (publicly viewable). That is, information such as the gender of partner you are interested in, relationship status, and family relations became viewable to those even without a Facebook account.

Facebook clearly have decided that the increased revenue possible from sharing personal information is worth battling government privacy commissioners and lawyers.  And that’s fine — so long as our government continues to enforce our laws and bring violators to account, we can play that game too.

I’ve never had a Facebook account.  I can be patient.

But those who still trust Facebook with personal information — and haven’t bothered to examine the minutiae of the site’s privacy settings — will continue to have their personal information shared with 400 million users and thousands of advertisers, data aggregators and, well, pretty much anyone else on the Internet.  At least until the wheels of justice grind to a conclusion…


Top 10 identity attributes

There was a really interesting discussion going on at the LinkedIn Identity Management Specialists group a while back about the top 10 identity attributes.

My contribution:

  • First Name
  • Last Name
  • Date of Birth
  • Gender
  • Former Last Name (at Birth)
  • Location of Birth
  • Passport number
  • Driver’s licence (or state/province ID) number
  • Professional or trade registration number
  • Bank account number

If you have a LinkedIn account this group is worth following. And for Canadian readers, check out Canadiam – IAM in Canada.


Google’s latest privacy troubles

Update 05/27: Kim Cameron has an excellent post on this issue (and a clarification here) that illustrates the identity impacts of Google’s wifi scanning.

It would appear that Google’s Street View cars were actively collecting data from unprotected home wifi networks over the past several years.  According to the New York Times article:

After being pressed by European officials about the kind of data the company compiled in creating the archive — and what it did with that information — Google acknowledged on Friday that it had collected snippets of private data around the world. In a blog post on its Web site, the company said information had been recorded as it was sent over unencrypted residential wireless networks as Google’s Street View cars with mounted recording equipment passed by.

I’m not sure how to react to this but it sure raises some questions:

  • Why would the Street View cars be scanning for unprotected networks in the first place? The company has said it helps to improve geo-location but given the other tools at its disposal, I suspect they weren’t relying on home network MAC addresses to keep their location data accurate.
  • Why would they then record user data — web sites visited, emails sent, etc. — and subsequently store it on central servers? How can this be classified as a ‘programming error’? Perhaps that explanation could fool some of the less technical authorities, but let’s get real here — systematic recording of user-generated data when only the MAC address is needed IS NOT a programming ‘error’.  It is a ‘function’.
  • Why would this only come to light after four years and why did it take a demand from a German official to inspect the car’s missing hard drive for this to become public at all?
  • Are we getting the full goods from Google, a company known for its privacy transgressions?

Companies like Google (and Facebook, a company with privacy troubles of its own) are successful because of the goodwill and trust extended to them by us.  There are other search engines and cloud services out there we can use.

Breaches like this are bad enough — the pithy excuses and blatant PR spin when caught are even worse.


The case for less ocean boiling

I don’t know who invented the term ‘boiling the ocean’ but it is a great description for projects that are too large, too ambitious and, ultimately, headed for failure.  Identity management projects run the risk of being set up to fail because their sponsors are trying to boil the ocean.

The problem lies in the scope of a typical IAM project.  These projects can often try to do far too much — the sponsor and project manager confuse the bigger, long-term goal with the project objective.

In an IAM strategy report I completed last year, my recommendation to the client was to use a phased delivery approach:

Smaller projects are easier to manage because they have a single focus and set of outputs to produce.  Adjustments to follow-on projects can be made based on lessons learned.  And with shorter projects, management can more frequently see the real results as each project completes – reports/briefings can be written and financial benefits documented.

Since 2007, I’ve been working as the IAM Program Manager for another client and putting these ideas into practice.  This program has been established with eight releases.  Each release is a project that runs between six and eight months, depending on the current needs of the business and our sponsor.

The key for keeping these projects short is scope management. The scope is determined a month or so before the project starts.  Inevitably, we get a change in scope sometime in the first few months — a new application needs to be integrated, the auditor wants a critical feature added, etc.

As any project manager knows, the triple constraint means that if you increase your scope, you can either extend the project schedule or add resources (and cost) to get the work completed on time.  This triple constraint is often addressed by clients by pushing out the schedule.  This also increases cost of course, even if the team size stays the same.

My philosophy is a bit different.  I look to trade off scope for… scope.  If six weeks of extra work are added, I will look to see if six weeks’ worth of scope can be removed.  Lower priority work can often be removed from scope and planned for the next project.

In other words, I want to keep the project schedule and costs fairly static so that the team can focus on an end date — and, by extension, new work and a new project after that date.  In a longer term program I place a lot of value in delivery.  The sponsor needs to see solutions delivered and projects closed.  We must be able to report to senior management actual business value and a list of real accomplishments, not just the percent complete on a project.

The end results are higher team motivation over the long haul — in three years, I have had zero team turnover — and better, more measurable business results.