Authentication issue at heart of lawsuit

If you have followed this blog for any length of time you’ll know that I often return to issues and opportunities related to strong authentication.  Last week’s news from eastern Texas is therefore of interest…

Apparently a customer of the PlainsCapital Bank lost $200,000 through one or more electronic transfers.  The bank offers what it claims to be a ‘two-factor’ authentication service.  After a user name and password are entered, the ‘second factor’ for authentication is an access code that is sent to the registered user’s email address.  The access code is entered by the user and their computer’s IP address is recorded (presumably to protect the session and for audit purposes).  Unfortunately for the bank and its customer, the emailed code was intercepted by what appears to be a Romanian hacker and the money was stolen via an unauthorized funds transfer.

By definition two-factor authentication must include two of three different factors: something owned, something known or something inherent (e.g. a biometric).  The first factor in this case is the user name/password combination, which is something known.  The second factor, the access code, is also something known.

Because both of these are in the ‘something known’ category, this is not two-factor authentication.  It may be stronger authentication than user name/password alone, but it is NOT two-factor.

The bank seems to have made an assumption that this code is ‘something owned’ because it was delivered to an email address that is controlled by the registered user.  The problem with this is that the email account itself is very likely protected by a single factor (a user name/password) that can easily be collected by any garden-variety keystroke logger.  The very idea that email is a suitable platform for sending secure access codes is odd to me — surely by now we all recognize the flaws in sending sensitive information via email?

An appropriate solution would include two unique factors combined with ‘security in layers’.  A user name/password plus a code sent to a registered mobile phone would be one example.  But I also like the suggestion in the article that layering good process — such as contacting the client (via phone) before such a large transaction was processed — would have also prevented this incident from occurring.
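
As a rough sketch of the ‘code sent to a registered mobile phone’ approach, the snippet below (Python, with a hypothetical send_sms function and a simple user record) generates a short-lived one-time code, stores only a keyed hash of it, and accepts a single submission before discarding it. The bank’s actual system would differ, but the key point is that the code travels out of band to a device the user owns, which is what makes it a genuine second factor.

    import hmac, secrets, time

    CODE_TTL_SECONDS = 300  # code expires after five minutes

    def issue_code(user, send_sms):
        """Generate a random 6-digit code, store only its keyed hash, and text it to the user."""
        # user["otp_key"] is a per-user secret (bytes) held server-side
        code = f"{secrets.randbelow(1_000_000):06d}"
        user["otp_hash"] = hmac.new(user["otp_key"], code.encode(), "sha256").hexdigest()
        user["otp_expires"] = time.time() + CODE_TTL_SECONDS
        send_sms(user["mobile_number"], f"Your access code is {code}")

    def verify_code(user, submitted):
        """Accept the code only if it has not expired; it is discarded after one check."""
        if time.time() > user.get("otp_expires", 0):
            return False
        expected = user.pop("otp_hash", None)
        supplied = hmac.new(user["otp_key"], submitted.encode(), "sha256").hexdigest()
        return expected is not None and hmac.compare_digest(expected, supplied)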

Perhaps it’s time to revisit what our Canadian banks are telling us about their security controls before casting stones towards our southern neighbours. It seems to me that without both strong authentication and security in layers, we — and our proud, large and stable financial institutions — are just as likely to suffer from this type of loss as this Texas bank.

Mike

Why invest in IAM?

I find myself being asked this question, indirectly or directly, by clients and prospective clients alike.  With all the demands on IT infrastructure spending and business application development (and integration), and with all the information security risks out there waiting for solutions to be implemented, why should an investment in IAM be a priority?

From the well-respected Kuppinger Cole blog comes this view:

Part of IAM’s job is protecting data, either directly or by protecting the systems that use and store data. That is also the backdrop against which compliance regulation, both internal and external, must be viewed. That also means that it is much easier to talk with business people about “access” rather than about “identity”. The big question is how do we control and monitor access to information and systems? To do that, we need to know who is allowed to do what – and who isn’t. The only way to achieve that goal is through true digital Identity Management. Anyone who thinks he can do it by granting rights and approvals based on IP addresses or MAC numbers is seriously kidding himself.

It strikes me as odd that there are still IT and information security professionals who believe IP and MAC access controls are sufficient, but it appears that this myth persists in enterprises.
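
To make the contrast concrete, here is a small illustrative sketch (the subnet, roles and permissions are invented). The first check trusts a network attribute that any machine sitting on, or spoofing, the right subnet can present; the second ties the decision to an authenticated identity and the roles assigned to it.

    # Network-attribute check: anything arriving from a 'trusted' subnet is allowed.
    TRUSTED_SUBNET = "10.1.2."

    def network_based_check(request):
        # Spoofable, and it says nothing about WHO is making the request.
        return request["source_ip"].startswith(TRUSTED_SUBNET)

    # Identity-based check: the decision follows the authenticated user, not the wire.
    ROLE_PERMISSIONS = {
        "payroll_clerk":   {"read_payroll"},
        "payroll_manager": {"read_payroll", "approve_payroll"},
    }

    def identity_based_check(user, permission):
        """Allow the action only if one of the user's assigned roles grants the permission."""
        return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user["roles"])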

Worse, I believe, is the view that the home-spun access control that has been built into legacy applications is ‘good enough’.  Why replumb our existing enterprise and customer-facing systems with a new-fangled IAM solution when we have the problem solved already?

This is a powerful myth that can be hard to overcome. But compared to application-specific controls, IAM has some significant advantages:

  • Compliance — Organizations today must comply with legislation and their own policies.  The access control sub-systems built into many legacy applications are simply not compliant, and remedying them may require significant rework and duplicated processes.  Conversely, an enterprise IAM solution can be implemented to be compliant from the start, and a single set of processes can be created to maintain identity and access information.
    • Example: Privacy Impact Assessments (required in Canada for all projects that deal with personal information) can be done once and shared across all applications.
  • Audit Support — ‘Siloed’ access control systems are very difficult to report on at audit time.  With IAM, consolidating information is much easier and a user’s access across multiple systems can be correlated (a small sketch follows this list).
    • Example:  A single reporting tool or sub-system can meet most (if not all) auditor reporting needs.
  • Help Desk Efficiency — With IAM, a single console for Help Desk agents can be implemented for end-user support purposes.  Naturally, a single system will offer improved efficiency and better service to end-users than multiple, application-based systems.
    • Example: Help Desk lookup tools can be standardized and easily learned by new staff. Password policies become consistent. Access to multiple systems can be suspended or revoked from a central point. Service to end-users improves.
  • Leverage and Speed — New applications, especially e-business and e-government systems that have to deal with privacy and security issues, can be readily designed around a common IAM solution.  Deployments can be rapid due to standardized interfaces and re-use of common templates.  Processes can be leveraged rather than rewritten from scratch, making the transition to a production environment smoother.
    • Example: Strategic applications that need to be implemented ‘right now’ can be rolled-out quickly with high security, advanced features and appropriate user privacy protection. Decisions can be made with confidence that the common IAM solution will meet both enterprise and line-of-business requirements.
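
As a hypothetical illustration of the audit point above: once every connected application records access against the same enterprise identity, correlating a user’s entitlements across systems becomes a simple roll-up rather than a manual reconciliation of siloed reports (the system names and records below are invented).

    from collections import defaultdict

    # Per-application access records, all keyed by the same enterprise identity.
    access_records = [
        {"system": "HR",      "user_id": "jsmith", "entitlement": "view_salary"},
        {"system": "Finance", "user_id": "jsmith", "entitlement": "approve_invoice"},
        {"system": "CRM",     "user_id": "adoe",   "entitlement": "export_contacts"},
    ]

    def access_by_user(records):
        """Roll up each user's entitlements across every connected system."""
        report = defaultdict(list)
        for rec in records:
            report[rec["user_id"]].append((rec["system"], rec["entitlement"]))
        return dict(report)

    # One call answers the auditor's question: "what can jsmith do, and where?"
    print(access_by_user(access_records)["jsmith"])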

Real IAM solutions offer real value, making business case development easier and more compelling.  However, widely-held myths about the effectiveness of network and application-specific controls need to be dealt with if broader IAM implementations are to be approved, funded and supported.

Mike

Identity in health services delivery

I came across this interesting headline a while back: Healthcare Identity Management Is Necessary First Step to Electronic Health Record Interchange.  The article has a link to a briefing from the Smart Card Alliance.  While this is an American organization, the context is similar enough to that of Canadian identity management in health service delivery.  The briefing has a fascinating statistic:

More than 195,000 deaths occur in the United States because of medical error, with 10 out of 17 medical error deaths due to “wrong patient errors.”

The implication is that a large number of lives could be saved by improving identity management, and the brief’s answer to this is, predictably, to implement smart cards.  The Smart Card Alliance feels that all citizens and health services professionals should be issued cards.

First of all, I think that smart card technology is very well suited to high-value transactions like those carried out in the health sector.  The form factor is convenient, the technology is robust and there are some good solutions out there that make session switching on shared computers fast and easy.

But in health service delivery, particularly in Canada, the identity and access issues are not the real concerns for physicians — whether I am insured or not, whether my health information is online or not, the physician’s job is to diagnose and treat.  This is particularly true in acute care situations such as emergency.  My understanding is that doctors and nurses in those environments are reluctant to use systems if those systems get in the way of treating sick or injured people, even if those systems contain in-depth medical records. Implementations of smart cards — or any other technology — need to be slick and flexible to be adopted.

On the patient side, I agree that it makes sense for patients to access their own electronic health record over the Internet. Smart cards are touted by the Smart Card Alliance as being a secure solution for strong authentication over the web, and although I’m aware of some exploits, it is a proven technology. The issue then becomes how to provision all those millions of users.  There are certainly some good processes out there for identifying individuals, but because most of them require in-person registration (or some other form of corroboration) there is the question of cost.  I know in Alberta we have recently empowered our network of registry agents to perform eligibility services (i.e. identification) for the health system and presumably this could be used to issue a smart card or other second-factor credential.

How would a fractured American system handle this provisioning?  Who would be responsible for issuance, changes, revocation, ruling on eligibility, etc.?  And who will pay for the smart cards and readers?

And what privacy issues would emerge from such a solution?  Keep in mind that a strong credential such as a health services smart card would have the potential to become a national credential for Americans. Identity fraudsters would seek out weak links in the on-boarding process to obtain this valuable credential. And future governments would have the ability to link medical and other records to a host of other databases…

Clearly, users would need to have clear rights and remedies, something that may be difficult in a country that does not have national privacy legislation.

It is a complicated topic, one that the Smart Card Alliance’s brief does not properly address in its zeal to promote its specific technology.

Mike

Cloud Computing: Schneier and Ranum weigh in

Unless you’ve been living in a cave over the past six months, you are probably aware that Cloud Computing is the Next Big Thing.  Of course, it isn’t new or unique — it is a form of centralized computing, and application delivery of this kind has existed since the first time-sharing systems emerged in the 60s.

But the big vendors need a story to push their products and services, and Cloud Computing is it for 2009. It isn’t surprising that the information security and privacy protection aspects of cloud computing are starting to get a lot of attention as well.

What are the risks? How secure is my data in the Cloud? What privacy protections can I rely on? Do you really trust your service provider?

Bruce Schneier and Marcus Ranum have a video from their Face-Off series that is well worth viewing for anyone looking to take advantage of Cloud Computing services.

I like Ranum’s emphasis on limited data access and lack of portability. Locking clients into a hosted application and database is going to be a problem when the client wants to use another provider. Just how do you move five years of email from Gmail to your own mail server? Can you quickly extract and replatform your critical sales data from Salesforce.com if Salesforce gets bought out by one of your competitors?
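
There is no simple answer, and that is the point. As a rough sketch of the do-it-yourself effort involved, something like the following (Python over IMAP, assuming IMAP access is enabled on the account and a valid credential is at hand) would pull messages down one at a time so they could be re-imported elsewhere; five years of mail makes this a long and fragile job.

    import imaplib

    # Connect to Gmail over IMAP (assumes IMAP is enabled on the account and
    # that an app password or equivalent credential is available).
    conn = imaplib.IMAP4_SSL("imap.gmail.com")
    conn.login("user@example.com", "app-password")   # placeholder credentials
    conn.select('"[Gmail]/All Mail"', readonly=True)

    # List every message, then save each one as raw RFC 822 text for later re-import.
    status, data = conn.search(None, "ALL")
    for num in data[0].split():
        status, msg_data = conn.fetch(num, "(RFC822)")
        with open(f"export_{num.decode()}.eml", "wb") as fh:
            fh.write(msg_data[0][1])

    conn.logout()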

Mike

Telus/Rotman IT Security Survey

Last year I commented on an excellent survey of IT Security practices that was conducted by Telus and the Rotman School of Management at the University of Toronto.  The survey for 2009 is now online.

Some interesting findings from 2008:

  • 4 percent of government organizations reported financial data loss due to information security breaches
  • 1 in 11 government organizations have lost confidential data
  • IT security investments directly impact (reduce) security incident reports
  • breach costs average 23 percent higher in Canada vs US

If you are involved in information security in a Canadian public organization or private-sector company, please click here and fill out the survey.  Your information will help to provide a complete picture of information security practices in Canada.

Mike

Protecting Sessions with Presence Detection

One of the more difficult aspects of implementing Identity & Access Management solutions is properly managing the security of a user session after authentication. Traditional systems rely on things like timeout counters to determine when a user has stopped using their computer.  Once a timeout occurs, the operating system or application will force a logout to occur.

On even the most cursory review, it is easy to see how this approach is flawed. If the timeout is too long, a malicious user could simply wait until the authorized user has left their computer and take over the session. Typical timeouts are 20 minutes, which gives an interloper plenty of time to gain access to a neglected session.  Shortening the timeout to solve this problem forces users to log in more frequently, impacting productivity and creating frustration.

The typical response is to train users to log out when they leave their computer unattended.  But this can be inconvenient and unproductive, particularly in demanding environments where time — even a few seconds — is at a premium.

What if a user’s presence in front of a computer could be detected, and the session locked automatically when he/she is no longer there?  This is the premise behind Viion Systems’ Sentinal Sign-Off solution.

Using standard web cams, the system automatically scans and detects a user’s facial features. Once the user is authenticated, the camera will track those unique features and automatically lock the session when the user is no longer present. Upon the user’s return, the Sentinal software will detect the correct facial image and the session is automatically unlocked.

Viion calls this ‘Active Presence and Identification’ technology, and it is specifically designed for those situations where highly sensitive data is accessed and where a short timeout configuration would introduce unacceptable inconveniences.
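
Viion has not published its implementation, so the following is only a sketch of the general idea using off-the-shelf parts: a webcam, a stock face detector and a countdown that locks the workstation once no face has been seen for a few seconds. It assumes Windows and the OpenCV library, and it detects a face rather than the authenticated user’s face, so it omits the identification step that Sentinal performs.

    import time
    import ctypes
    import cv2

    ABSENCE_LIMIT_SECONDS = 10  # lock after this many seconds without a face in view

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    camera = cv2.VideoCapture(0)
    last_seen = time.time()

    while True:
        ok, frame = camera.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            last_seen = time.time()                   # user still present; reset the clock
        elif time.time() - last_seen > ABSENCE_LIMIT_SECONDS:
            ctypes.windll.user32.LockWorkStation()    # Windows-only session lock
            break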

It can also be used in organizations that want to prevent users from sharing a session.  For example, two users at a counter service may have one computer to share. In many cases, they will simply leave their session running while a co-worker accesses applications under the first user’s credentials. Obviously, this practice would not be compliant with the security policy at many organizations.  The Sentinal Sign-Off system can eliminate this weakness.

It is worth noting that the system does not need to store the facial image or video data.  It establishes a link between the session and the individual for as long as the session exists, then discards the data. User privacy concerns should not be an issue with most implementations.  (It does have the capability of recording and storing images, but this is not required for the solution to work.  The recording of images feature can be used for specific security cases that clients may have.)

I had a demo of the Sentinal Sign-Off system last week, and can confirm it operates as advertised.  One handy feature is that when it ‘loses sight’ of the user, it will show a countdown.  When you return to the field of view, the countdown stops.  The demo showed how a Sentinal login screen is displayed after session locking, but the company assures me that it can pop up a Windows or application login screen if required.

Viion’s system isn’t the only method that can be used to meet this business need. Sonar, proximity devices and pressure mats all offer similar capabilities — but each has its own limitations.  For example, sonar cannot distinguish between people and inanimate objects.

Aside from some great user convenience, this looks like a solid session management system for niche business needs.  I can see it working well with health services applications, financial systems and certain operations consoles where there is a demand for high security without sacrificing user convenience.

Mike

Passport Canada’s Retreat

Today’s Globe and Mail brought news of Passport Canada’s decision to abruptly cancel its online passport application system.  The online service allowed Canadians to fill out their passport application online, and was launched four years ago as a progressive example of e-government.

The reason given for removing the service was a lack of ‘convenience’ for passport applicants, with passwords cited as an example of that inconvenience.  The system is being replaced by online forms that don’t require a user account and password.  Presumably a user will now need to fill in the form on a web page, then print and bring the form into a Passport Canada office for processing.

Of course, Passport Canada has been under attack by the Canadian privacy commissioner and had an embarrassing security breach in late 2007.  The claim that the service was inconvenient due to the need for users to remember passwords is a bit suspicious — by this logic, we’d have wholesale dismantling of online government services and the requisite hiring frenzy to replace them with counter representatives…

A better explanation is that the agency has clearly decided that the risks of making passport data available on the web have exceeded the organization’s tolerance levels. And good for them.  Until they are able to deliver a highly secured system, or reduce the amount of data accessible online, the passport application should be removed.

Mike

PS2009 — Justin Somaini, Symantec

Feb 3rd, 1:30pm
Live blog post…

Justin Somaini’s talk was on information security in turbulent times:
– 70 percent of malware is targeting sensitive information
– 10,000 to 20,000 virus signatures created each DAY (up from 1,000 per week only 4 years ago…)

Threats are increasing. With security budgets likely to drop during the recession, can we find other ways to educate and motivate employees and executives?

The image of information security people is negative, making communication between IT and business difficult. What is needed is a strong 2-way conversation to improve relationships. Mr. Somaini’s experience is that the relationship is key to gaining trust between the two groups. At Symantec, he has observed a significant increase in the reporting of security incidents immediately after collaborative visits with business users.

The point of the talk is that fear can’t be used to change behavior — information sharing and relationship building are the keys.  Less policing, more discussion…

Mike

PS2009 — Telus/Rotman IT Security Study

Feb 3rd, 10:10am
Live blog post…

Alan LeFort from Telus presented on this Canadian IT security practices survey and study:
- 60 percent of Gov’t orgs don’t enforce their security strategy
– 4 percent of Gov’t orgs reported financial data loss
– 1 in 11 have lost confidential data
– private organizations almost 3 times more likely than Gov’t to communicate security issues with stakeholders
– IT security investments directly impact (reduce) security incident reports
– Gov’t strong in network security, weak in application security (e.g. lack of strong authentication)
– breach costs average 23 percent higher in Canada vs US
– private sector paying 35 to 40 percent higher salaries for security staff

The 2009 study will target 800 respondents (up from 306 in ’08). Currently looking for input to survey design — Google ‘Rotman Telus Security Survey’ to find the site.

Mike

10th Annual Privacy and Security Conference

I’m back in Victoria, British Columbia this week for what is becoming an annual event for me — the Privacy and Security Conference sponsored by the BC Government.  I like this conference because it has a public-sector flavour to it; the speakers and attendees see the same challenges in their work as I do.

The plan is to produce a post or two each day but we’ll see how it goes…

Mike