Friday, July 18, 2008

NebuAd hauled down to Capitol Hill

WASHINGTON -(Dow Jones)- The chief executive of an Internet advertising start-up admitted Thursday that his firm could track people's activity on multiple Web sites without their express permission.
NebuAd CEO Robert Dykes said at a House hearing that the Internet service providers with which his company partners send their customers letters 30 days before any tracking begins.
The letters, which Dykes described as "robust," tell subscribers how they can opt out of the monitoring. If the customer doesn't respond, however, NebuAd begins collecting data on their browsing activities to offer ads relevant to their interests.

Yep - you can opt out all right: they will place a cookie in your browser that prevents their technology from performing its dastardly deeds. Of course, if you ever get rid of that cookie, you are right back in the mix.
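To see why this opt-out model is so fragile, here is a minimal sketch of how a cookie-based opt-out typically works. The cookie name and logic are my assumptions for illustration, not NebuAd's actual implementation:

```python
# Hypothetical sketch of a cookie-based opt-out. The cookie name
# "ad_optout" is an assumption, not NebuAd's real cookie.

def should_track(request_cookies: dict) -> bool:
    """Tracking proceeds unless the opt-out cookie is present."""
    # The opt-out state lives only client-side, so clearing your
    # browser cookies silently re-enables tracking.
    return request_cookies.get("ad_optout") != "1"

print(should_track({"ad_optout": "1"}))  # opted out -> False
print(should_track({}))                  # cookies cleared -> True
```

The design flaw is that the default is "track," and the only record of your choice is a file you routinely delete.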

Thursday, July 17, 2008

Another ISP caught with their NebuAd down

The Washington Post is reporting a story today that Embarq, an ISP based in Kansas, has been using the same technology that Charter Communications was pressured to drop just weeks ago. Apparently Embarq didn't tell their subscribers about this, and Congress is fast moving toward considering this and similar uses of deep packet inspection technology to be wiretapping. They have a link right on their home page labeled "your privacy rights," but apparently that is just for show.

This is the first of their "five privacy principles," which apparently allows them to look at where their customers travel on the web.

EMBARQ creates, obtains, and uses your personal information to provide you the products and services you order, and to present you with product and service offerings that we believe may interest you

This is their definition of personal information.

Personal information is information that is directly associated with a specific person such as his or her name, address, telephone number, e-mail address, activities, and personal preferences.

We collect personal information about users of our products or services in the normal course of our business. This is how we know where to send you a bill for service...

Or an ad for something you may want.

All of their references to cookies in the privacy policy relate to cookies on the Embarq site, and mention nothing of the types of cookies used in the NebuAd realm. I am usually the last one to think that Congress does much effectively, but perhaps this will be the nail in the coffin of this technology and stop other ISPs from getting the same idea. These types of privacy mishaps, where companies had a privacy policy that didn't outline these activities but thought it was their God-given right to do them anyway, have gotten several companies in trouble with the FTC for unfair and deceptive trade practices, plus a nice visit from the auditors every two years for the next 20 years. I'm hoping Embarq is next, and that it serves as a lesson to other ISPs.

Wednesday, July 16, 2008

Verizon data breach report

Verizon Business released a study of more than 500 data breaches on which they consulted from 2004 to 2007, entitled the "2008 Data Breach Investigations Report." Whether you take the findings here to be an accurate barometer of the state of information security is up to you, but I am much more impressed with this report than anything else I have seen over the last several years. The sample size is a little small, and I have some concern over the source, since I didn't even know Verizon did this type of consulting. So either I am out of touch, or other people in similar-size organizations don't know this fact either. They did report that 26% of the companies in the survey had at least 1,001 employees. The actual segment states 1,001-10,000, but since that is a little broad, I will go with my previous assessment and assume most of them are on the low end of this range.
What this report tells us about the state of information security is that what we are doing, and where we are spending our money, is not working, and that although you can write all the policies you want, you had better make damn sure someone is reading them and following them. Now for my rambling thoughts on and highlights of the report.

The report covers the period from 2004 to 2007 and includes 500 incidents. Of course, these are only public incidents, and only incidents where Verizon was involved, so all assumptions are made knowing that this may or may not be representative of the population as a whole.

26% were at organizations with 1,001-10,000 employees
22% had 101-1,000 employees, and 30% had 11-100 employees

73% of the cases involved external parties as the source of the attack, 39% were the result of partners, and 18% were the result of an insider. The numbers are somewhat skewed in that some cases involved a combination of those sources, leading to a total higher than 100%. Since the three categories sum to 130%, roughly 30% of the cases must have involved more than one source (though not necessarily all three). This seems to go against everything we have always heard about insiders causing the largest percentage of losses. I still believe that may hold true and simply isn't reflected in a large number of reports, because there is often no evidence of insiders performing these breaches, and the companies may not even be aware of them.

The more interesting statistic is the high number of breaches due to partners. When the report weighed the number of records compromised against the probability of compromise, partners were the biggest risk, closely followed by internal employees (73 to 67, respectively). So we need more controls, contractual obligations, and monitoring around partners, but the internal threat, when you look at the number of records compromised, is still a big one. When the report calculated the number of records compromised from external sources, it was dramatically lower than for internal employees (30,000 vs. 187,500). And what was the source of the internal breaches, you may ask? 50% of them were caused by IT admins. For the partner breaches, the report stated
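The source percentages above overlap, and a quick back-of-the-envelope check shows how much. This is just arithmetic on the figures quoted above, nothing more:

```python
# The report's attack-source categories overlap, so they sum past 100%.
external, partner, insider = 73, 39, 18
total = external + partner + insider   # 130

# The excess over 100 is the minimum share of cases that must have
# involved more than one source category (not necessarily all three).
overlap = total - 100
print(overlap)  # 30
```

So at least 30% of the breaches had multiple parties involved, which is itself a notable finding.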

“Partner-side information assets and connections were compromised and used by an external entity to attack the victim’s systems in 57 percent of breaches involving a business partner. Though not a willing accomplice, the partner’s lax security practices—often outside the victim’s control—undeniably allow such attacks to take place.”

What is needed here are controls and monitoring of partners, but how much of this will be effective? Making them change their passwords every x days wouldn't have helped (see the previous entry on this from 7/15), and perhaps monitoring wouldn't have caught this either. What about serious liability clauses in the contract, with monetary penalties? If vendors are on the hook not only for their losses but for yours as well, might we be able to prevent some of these? Another interesting statistic: in 16% of the cases where partners were the source of the breach, the deliberate, malicious actions of remote IT administrators were the cause.

One of the most interesting results in the report for me is figure 12, which illustrates patch availability at the time of the breach. In 71% of the cases, a patch had been available for more than a year, and in none of the cases reviewed did a breach exploit a vulnerability whose patch had been available for less than a month. In only 4% of the cases had a patch been available for one to three months. So all of the meetings on the second Wednesday of the month to review the latest Microsoft patch releases are a waste of time; we should instead be focusing on whether patches are being applied across the board. A simple vulnerability-to-host ratio will suffice here, and the number should be decreasing over time.
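That vulnerability-to-host ratio is simple enough to sketch. The data structure below is an assumption for illustration; in practice you would feed it from your vulnerability scanner's output:

```python
# Sketch of the vulnerability-to-host metric suggested above.
# Input format (host -> count of open, unpatched vulns) is assumed.

def vuln_to_host_ratio(open_vulns_per_host: dict) -> float:
    """Total unpatched vulnerabilities divided by number of hosts."""
    hosts = len(open_vulns_per_host)
    return sum(open_vulns_per_host.values()) / hosts if hosts else 0.0

scan = {"web01": 3, "db01": 0, "mail01": 6}
print(vuln_to_host_ratio(scan))  # 3.0
```

Track this single number month over month; if it isn't trending down, your patching process is broken regardless of how many patch-review meetings you hold.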

Another stat of note is the review of common attack pathways. In 42% of the breaches, remote access and control was the pathway, and Internet-facing systems were the pathway in 24% of the cases, so we again seem to be spending money and time on the wrong things. Simply changing vendor default passwords would most likely have prevented most of these attacks.

Once these companies were attacked, it took 63% of them months to discover the breach, and 70% of the breaches were reported to them by third parties; employees ranked second at finding them (12%), followed by event monitoring (4%) and third-party audit (1%). Interestingly enough, in 82% of the cases the victim had the information needed to detect the breach but wasn't monitoring the events carefully enough to find it. This tells me that we need to spend more money on security awareness, and develop the monitoring as well as the collection of logs. I would like to know how many of those companies had expensive SIM products in place.

So what can we glean from this report?

  1. We need to spend more on basic controls and procedures, and make sure they are being followed
  2. We need to make sure employees know where and how to report suspicious activity because it is cheaper than monitoring events and three times as effective
  3. If you do collect logs – have some automation in place to alert you
  4. Patch as much as you can across the board regardless of when the patch came out
  5. Watch your partner connections
  6. Know where the data is (66% had breaches of data they didn’t know was on the system)
  7. Ensure procedures and policies are being followed
  8. Monitor IT admin activity and ensure background checks are performed

Tuesday, July 15, 2008

Authentication in a B2B environment

I have had a running discussion with a colleague of mine concerning the use of password expiration for B2B sites. We are talking here about partner sites accessed through the Internet via some type of web portal. My argument was that for a B2B site without mission-critical or very sensitive data, it does not make sense financially or from a security perspective to require the external organizations to change their passwords, or to force two-factor authentication. Let's step back for a minute, not to the nine solutions running through your head right now, but to the question of what we are trying to prevent.

We are trying to prevent one of the following:

  • An unauthorized person at the partner organization from accessing someone else's account and information, or performing some type of fraud with someone else's account.
  • An unauthorized reckless wrongdoer on the Internet from accessing the partner's information, or accessing the target system and doing bad things that cost money to either side.
  • Someone leaving partner firm A and going to work for partner firm B and still having their account active.

There are probably several I cannot think of right now, but let's concentrate on these for a moment, and break them down into several risks and address them.

Risk 1 - password cracking: someone internal or external to the partner firm guessing the password.

If the account locks after several attempts, then this will be largely ineffective, and if it doesn't, they would be able to script a password cracker that would most likely break the password long before it was due to change anyway. That is, unless you are going to have them change their password daily (good luck with that). But just use a token, you say; then they wouldn't be able to break the password. That is certainly a good solution for very sensitive data, but for a large group of users it gets very expensive, and usually results in the token being taped to the monitor, perhaps with the code attached to it as well. Certificates are another solution, but they are not as portable as most people would like and are a bitch to manage and revoke. The more you force a user to change their password, especially someone at an external firm who probably doesn't care much about their own company's security and even less about yours, the more likely the password will be written down on their monitor, or kept in the clear in a spreadsheet shared by everyone in the company.
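Some rough arithmetic makes the point about why lockout beats expiration. The guessing rate and password policy below are my own illustrative assumptions, not figures from anywhere:

```python
# Rough numbers (assumptions for illustration): an online guesser
# running 10 guesses/second against an 8-character password drawn
# from upper/lowercase letters and digits (62 symbols).
keyspace = 62 ** 8                    # total candidate passwords
guesses_in_90_days = 10 * 86400 * 90  # guesses in one rotation period

fraction_searched = guesses_in_90_days / keyspace
print(f"{fraction_searched:.2e}")  # a vanishingly small slice of keyspace

# Meanwhile a 5-attempt lockout caps the attacker at 5 guesses, period.
```

Against online guessing, the 90-day rotation buys you essentially nothing that the lockout didn't already; rotation only helps if the password was already stolen, and even then only until the next theft.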

Risk 2 - person leaves partner company A and goes to work for partner company B and their account is not disabled.

If accounts are disabled after x days of inactivity, then this has the same effect without the heartache of forced expiration, which I would argue actually reduces security for the people who are logging in.
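A dormant-account sweep like that is trivial to automate. Here is a minimal sketch; the field names and the 90-day limit are assumptions, and the "x days" is a policy choice for your own environment:

```python
from datetime import datetime, timedelta

# Sketch of disabling dormant partner accounts. Account record
# fields ("user", "last_login", "enabled") are assumed for this example.
INACTIVITY_LIMIT = timedelta(days=90)

def disable_dormant(accounts: list, now: datetime) -> list:
    """Disable accounts past the inactivity limit; return their usernames."""
    disabled = []
    for acct in accounts:
        if now - acct["last_login"] > INACTIVITY_LIMIT:
            acct["enabled"] = False
            disabled.append(acct["user"])
    return disabled

now = datetime(2008, 7, 15)
accounts = [
    {"user": "alice", "last_login": datetime(2008, 7, 1), "enabled": True},
    {"user": "bob",   "last_login": datetime(2008, 2, 1), "enabled": True},
]
print(disable_dormant(accounts, now))  # ['bob']
```

Run something like this nightly against the portal's user store and the departed-employee problem largely takes care of itself.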

For what it is worth, here is my take on this, but I would appreciate any feedback on differing opinions.

  1. Force users to pick complex passwords (no dictionary words, a suitable length, upper- and lowercase letters, special characters, etc.)
  2. Disable accounts that have not been logged into for more than x days
  3. Lock accounts after 5 invalid attempts
  4. Only use tokens for very sensitive sites with a smaller number of end users, since two-factor (or ten-factor) authentication won't prevent the trojan installed on Partner A's system from letting the user log on and then piggybacking on that connection all the way to your site
  5. Audit authentication events and trigger alerts on things like several invalid attempts across several accounts from the same source, or a large number of attempts in a short period of time
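The alerting in item 5 can be sketched in a few lines. The thresholds and event format here are illustrative assumptions; tune them to your own traffic:

```python
from collections import defaultdict

# Sketch of item 5: flag a source that racks up invalid logins or
# probes several accounts. Thresholds are illustrative assumptions.
MAX_FAILURES = 10   # total failed attempts per source in one window
MIN_ACCOUNTS = 3    # distinct accounts probed from one source

def suspicious_sources(failed_logins: list) -> set:
    """failed_logins: (source_ip, account) pairs from one time window."""
    counts = defaultdict(int)
    accounts = defaultdict(set)
    for ip, acct in failed_logins:
        counts[ip] += 1
        accounts[ip].add(acct)
    return {ip for ip in counts
            if counts[ip] >= MAX_FAILURES or len(accounts[ip]) >= MIN_ACCOUNTS}

# One source failing against four different accounts trips the alert.
events = [("10.0.0.5", f"user{i}") for i in range(4)]
print(suspicious_sources(events))  # {'10.0.0.5'}
```

The point is that the raw data to catch these attacks is already in your authentication logs; per the Verizon findings above, what is missing at most victims is anyone actually looking at it.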

In my opinion, we either need to force users to change their passwords hourly or not at all. Passwords are ineffective for truly good authentication, but the alternatives are too expensive, too difficult to manage, or both, to make good business sense. Furthermore, the strong authentication solutions are ineffective against today's threats, and their advantages are often circumvented by end users because of the difficulties they impose. We need to make the controls easy enough that users don't need to circumvent them, and use risk analysis to ensure we are placing controls at the correct points to mitigate the problems mentioned earlier. Additionally, the reports from the Anti-Phishing Working Group suggest that the FFIEC mandates that went into effect at the end of 2006, which included strong authentication among other items, have had no effect on phishing.