
January 29, 2009

DLP: Sharks in the water, clouds on the horizon

Blogger: Trent Henry

Data loss prevention, data leakage protection (digital light processing?)… Whichever expansion of the DLP acronym you prefer, there’s no denying that it’s been a wild two-year ride. When Burton Group first started tracking the DLP space, we observed a smattering of vendors playing in network-specific or host-specific detection of sensitive data flows. The best tools generally came out of semantic analysis projects that were part of Ph.D. programs at various research universities. Innovative entrepreneurs saw value in the technologies and formed start-up companies around this core. Other solutions played tangentially: they typically operated as device-control agents, preventing unauthorized use of USB-connected flash drives, iPods, or other removable media; seldom were they actually content-aware. However, with increased maturity even these solutions have added considerable language-analytic capabilities. The field of independent DLP vendors subsequently became quite crowded.

That, of course, has changed considerably. DLP is disappearing as a standalone feature or product. Instead, it’s becoming part of a broader information-centric security suite. If there’s any doubt of that, have a look:

[Image: DLP_fish]

There are some notable absences from the above feeding frenzy. Oracle, IBM, and Microsoft have each made significant security investments in other tools, but none of them has snapped up the DLP capability. Although Microsoft recently announced a partnership with RSA, there’s generally a noticeable gap in the feature set of these ostensible “sharks in the water.”

[Image: DLP_fish2]

Whether these large players will acquire the remaining DLP companies (Fidelis, Vericept, Verdasys, or Code Green) remains to be seen. But it’s going to be an uphill battle for standalone vendors to persuade the market that they have compelling advantages in the face of economies of scale. Each of the acquisitions thus far has made a great deal of sense by promising to couple DLP with desirable enterprise features: broader centralized policy management, endpoint protection agents, encryption, and content management.

Bringing together content management and DLP is no minor advantage. Over the last 18 months, Burton Group’s client Dialogues have shifted considerably from concerns about data in motion to those about data at rest. That is, security teams are striving to know where sensitive data lies across the enterprise. In part this is due to PCI requirements for protecting cardholder data, and in part it’s due to eDiscovery requirements for finding electronically stored information. Whatever the case, organizations need to stretch their security dollars, so they look for a tool that can provide both protective and discovery features. DLP products have plenty of shortcomings and room for improvement, to be sure. But they are tackling the right problems.
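To make the discovery idea concrete, here is a minimal illustrative sketch of what a data-at-rest scan does at its simplest: walk a file share, flag card-like numbers with a regular expression, and use a Luhn checksum to weed out obvious false positives. This is a toy in Python, not how any particular DLP product works; the share path is hypothetical, and real products layer far richer semantic analysis and many more data types on top.

# Toy "data at rest" scan: walk a directory tree and flag files containing
# card-like numbers. The share path is hypothetical; real DLP products use far
# richer semantic analysis than this sketch.
import os
import re

# 13-19 digits, optionally separated by single spaces or dashes
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def looks_like_card(candidate: str) -> bool:
    """Length check plus Luhn checksum to weed out obvious false positives."""
    digits = [int(ch) for ch in candidate if ch.isdigit()]
    if not 13 <= len(digits) <= 16:
        return False
    checksum = 0
    for i, digit in enumerate(reversed(digits)):
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        checksum += digit
    return checksum % 10 == 0

def scan_tree(root: str):
    """Return (path, hit_count) for every file with at least one likely card number."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as handle:
                    text = handle.read()
            except OSError:
                continue
            hits = [m.group() for m in CARD_PATTERN.finditer(text) if looks_like_card(m.group())]
            if hits:
                findings.append((path, len(hits)))
    return findings

if __name__ == "__main__":
    for path, count in scan_tree("/shares/finance"):  # hypothetical file share
        print(f"{path}: {count} card-like number(s)")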

Just as we’re in the midst of DLP acquisition and integration, however, significant changes are at hand.

Yes, cloud computing admittedly brings a number of opportunities to IT teams and vendors alike. It makes IT costs more predictable, allows teams to focus on “core competencies,” and reduces the risks of technology obsolescence. But a security practitioner has to harbor a certain degree of pessimism. Although the deployment of a DLP solution can help locate data in storage and constrain data in flight, once data moves beyond the enterprise perimeter, such tools no longer have any effect. And the number of content and collaboration products appearing in the cloud continues to grow. Application infrastructure, storage, backup, and other services currently housed in the enterprise will soon have counterparts on the net – along with attendant sensitive data.

This means that DLP must grow with the cloud. Once again, acquisitions should help. As large vendors host data via software-as-a-service (SaaS) and other cloud-related offerings, they should consider the use of DLP tools to protect that data. This could be an additional service provided to customers, or it could be part of the core offering. Integrating with the customer’s enterprise DLP solution and policies would be the ultimate goal, but in the meantime there’s plenty of opportunity to plan for making the cloud a safer place by meaningfully adding DLP. Enterprises themselves should consider how sensitive information will be controlled when moving to the cloud. Many cloud vendors—such as infrastructure-as-a-service (IaaS) providers—are unlikely ever to erode their thin margins by adding DLP. Thus, organizations will need to consider administrative controls (e.g., contract terms and audits) or alternative technologies (e.g., encryption and enterprise digital rights management) to combat data-loss storm clouds.
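For enterprises weighing those alternative technologies, a rough sketch of the encryption option looks like the following: data is encrypted before it ever leaves the enterprise, so the cloud provider only stores ciphertext. This example assumes the third-party Python cryptography package (Fernet) and reduces key handling to a local file purely for illustration; any real deployment lives or dies by proper key management.

# Sketch of encrypting data before it leaves the enterprise, so a cloud provider
# only ever stores ciphertext. Uses the third-party "cryptography" package;
# keeping the key in a local file is purely for illustration.
from cryptography.fernet import Fernet

def load_or_create_key(path: str = "enterprise.key") -> bytes:
    try:
        with open(path, "rb") as handle:
            return handle.read()
    except FileNotFoundError:
        key = Fernet.generate_key()
        with open(path, "wb") as handle:
            handle.write(key)
        return key

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = load_or_create_key()
    blob = encrypt_for_upload(b"sensitive customer record", key)
    # upload_to_provider(blob) would be the hypothetical SaaS/IaaS call
    assert decrypt_after_download(blob, key) == b"sensitive customer record"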

[Image: DLP_cloud2]

January 23, 2009

Consumerization, the White House, and Rockin’ IT

Blogger: Dan Blum


Obama’s White House staff is the latest poster child for consumerization. As described in the Washington Post article Staff Finds White House in the Technological Dark Ages, Obama officials fresh from a campaign of “relentless social networking” finally arrived at the White House, only to “encounter a jumble of disconnected phone lines, old computer software, and security regulations forbidding outside e-mail accounts.”


Consumerization may be a new buzzword, but it’s a well-established phenomenon. The “PC revolution” and the “Internet revolution” rocked IT in the 1980s and 1990s, respectively. Many organizations with their heads in the sand suffered badly. Organizations that hampered deployment lost the initial opportunity to fully leverage these productivity-enhancing tools. Organizations that failed to proactively direct deployment ended up with an unmanageable, insecure mess.


Because the consumer market is now much larger than the enterprise market, consumerization will only increase. We’re headed back to the future with:

  • Consumer applications such as social networks and other “Web 2.0” technologies
  • Consumer smartphones such as iPhone
  • User-procured and user-managed computers

It’s time for IT to ride this tiger. Let the Obamas in your organizations use smartphones, let the David Plouffes find fans on Facebook, consider useful apps like Salesforce and all the rest, BUT YOU MUST MANAGE IT. The following are some things to look out for.


As I wrote in iPhone and iTunes: The Thin Edge of Consumerization’s Wedge, consumer applications are not designed for enterprise-class security and manageability. They may have vulnerabilities that put the organization’s data at risk. Organizations need to do risk analysis to determine how to manage these vulnerabilities, and push the vendors to cover or eliminate them. It’s also important to develop policy on use control and to promote user awareness of that policy. This can be a cultural issue for users of social networks like Facebook and MySpace, which promote a degree of personal openness that may or may not be appropriate for organizational purposes.


Archival is one of the more difficult deficits of consumer applications and smartphones. Vendors are getting better at helping organizations archive organizational email, but not web mail, text messages, Facebook posts, and the like. Even Obama’s BlackBerry – generally speaking, an enterprise-class device – may not comply off the shelf with laws requiring archival of all White House communications.


User-procured and user-managed computers, however, pose the most difficult consumerization dilemma. Just as de-perimeterization forces IT to shift security features from the network to the endpoint, the business takes control of the endpoint away...


Promoted in the name of cost savings, the idea of user-owned and user-managed PCs seems like a really bad one from the security perspective. But is it really? IT was already granting access to all kinds of contractors, outsourcers, and external partners with unmanaged PCs. “Externalization” is what they’re calling that now, but it’s been going on for years. Whether we externalize for employees’ or for partners’ sake, we have to manage the risk of unmanaged PCs. Typically this means that other IT assets must be hardened and able to defend themselves. It’s a tricky thing. Security architectures may shift some security features from the network to the endpoint, but application and data architectures must trust the endpoint less.

 
My Shifting Defenses: Security Futures for Networks, Applications, and Data report considers unmanaged computers and other implications of de-perimeterization and externalization. This document is available to our subscribers or to anyone who registers at the guest link here. And, at the 2009 Catalyst Conference, we’ll be covering topics like desktop virtualization and other technologies that are gradually maturing to the point where they can really make your IT department rock.

January 15, 2009

To Err is Human

Blogger: Eric Maiwald

I was reading an article this week about the “hacking” of Intel’s Trusted Execution Technology. For some reason I was not surprised. Then today, I saw “Experts Reveal 25 Coding Errors That Let In Hackers.” The quote I found interesting in this article was this: “The SANS Institute said it was shocking that most of these common security errors are not understood by programmers.” I have to admit that I don’t find that shocking at all (whether the issue is the programmers making the mistakes or their training being insufficient). What I’m shocked about is that security folks are shocked about it!

Humans are imperfect. We make mistakes all the time. If I ever forget that, I just have to type a few words and watch how many times I need to backspace to correct a typing error. We make mistakes when we design and build systems and products. Sometimes the mistakes are obvious (like when I type “dhe” instead of “the”) and sometimes they are subtle and hard to find.

Since we make mistakes, we have learned to live with this fact. A lot of times, people who build complex systems understand the likelihood of a mistake and they either build in some type of verification step within the system or the process of building the system requires multiple reviews and tests before it is put into use. For systems that control processes where the consequences of something bad happening are high, we not only build in verification checks but we also monitor the system. Often we also have people monitoring the people monitoring the system!

So why is it news when we find an error, or a mistake, or a vulnerability in software or hardware products? Why don’t we just assume that there will be mistakes in what we do? I’m not suggesting that we stop testing products or performing code reviews but I think we need to realize that the product of an imperfect human is going to be itself imperfect.

How does this translate into security and risk management? Well, if we assume that there will be errors and vulnerabilities in products and systems, we do not rely on a single control to manage our risk. It really is that simple. Oh sure, there are low-risk cases where it does not make sense to pay for extra controls, but when we have systems whose compromise will impact the enterprise or potentially cause injury or death, it behooves us to implement defense in depth. We don’t assume one control will always work. We install multiple controls. We monitor the system so that we can identify problems and react accordingly.
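A quick back-of-the-envelope calculation shows why layering matters. If controls fail independently, the residual risk is the product of their individual failure rates; the numbers below are invented purely for illustration.

# Back-of-the-envelope illustration: if controls fail independently, residual
# risk is the product of their individual failure rates. Numbers are invented.
def residual_risk(failure_rates):
    risk = 1.0
    for rate in failure_rates:
        risk *= rate
    return risk

single_control = residual_risk([0.10])                 # one control that fails 10% of the time
layered_controls = residual_risk([0.10, 0.20, 0.15])   # three imperfect, independent controls

print(f"single control: {single_control:.3f}")             # 0.100
print(f"three layered controls: {layered_controls:.3f}")   # 0.003

The catch, of course, is the independence assumption: controls that share a common design mistake fail together, which is exactly the kind of human error discussed above.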

Last year, I wrote a blog entry On Response that mentioned how hard prevention is to do. Our mistakes are what make prevention hard. We can’t possibly construct the perfect preventative mechanism so we have to include additional controls that detect when our preventative controls fail and allow us to respond. This is just the way things are in our imperfect world. Rather than being surprised when you read about the latest vulnerability or error, just look at it as another reason why we don’t rely on just a single control.

January 08, 2009

An unwelcome outsourcing surprise

Blogger: Ramon Krikken 

We keep hearing and writing about the ailing economy, lay-offs, and other bad news concerning vendors (and partners) we do business with, so we’re generally not all too surprised or worried. The news in late December that banking regulators allowed IndyMac bank to skirt regulations wasn’t necessarily surprising, but somewhat worrisome in terms of being able to accurately assess a company’s health. However, this week’s news about Satyam Computer Services cooking their books and overstating their cash balance by $1 billion marks a new high in “things you didn’t quite see coming.” For some it hits particularly close to home because it’s an IT consultancy.
 
When we talk about risk aggregation – when dependencies cause compounded risk – we often think of hierarchies where we depend on vendor A, vendor A on vendor B, and so on. It’s a great way to simplify the risk calculations, but it ignores the fact that there actually is a complex, intertwined vendor ecosystem. We often assume that our simplified calculations are good enough and the best we can do, but as we hand off ever more critical business operations to others we keep losing visibility, and thus lose accuracy in our assessment. Because we can’t check everything, we rely on third parties – audit firms and the like – to help fill in the details, but in the case of Satyam (audited by PricewaterhouseCoopers) this seems to have been ineffective.
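A toy model makes the compounding explicit. If a service depends on a chain of vendors and the failure of any link disrupts it, the probability of disruption grows quickly even when each vendor looks individually safe; the probabilities below are invented for illustration, and the real ecosystem is a tangled graph rather than a neat chain.

# Toy model of compounded vendor risk: if the failure of any link in a dependency
# chain disrupts the service, disruption probability compounds quickly.
# Probabilities are invented for illustration.
def chain_disruption_probability(link_failure_probs):
    survives = 1.0
    for p in link_failure_probs:
        survives *= (1.0 - p)
    return 1.0 - survives

# we depend on vendor A, vendor A on vendor B, vendor B on vendor C;
# each is "only" 5% likely to fail in a given year
print(round(chain_disruption_probability([0.05, 0.05, 0.05]), 3))  # ~0.143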
 
To compensate further, we rely on yet another set of third parties to create regulations, audit compliance with those regulations, and let us know when something is amiss. Different vendors, different industries, different geographies, different regulations, different uncertainties and risk – something that’s apparent from Satyam’s ability to cook the books unnoticed, which for many large IT consultancies would be difficult due to Sarbanes-Oxley. It’s almost like trying to make sense of structured investment vehicles (SIVs), collateralized debt obligations (CDOs), derivatives, and socio-political influences in “the global economy” … and see how well that worked out, despite regulation and purportedly careful analysis by financial analysts.

The short of it is that – like the economy and life itself –  we can’t map all dependencies, identify all weaknesses and threats, and take a purely proactive stance. Even when all indicators are green (and assuming you have some faith in these metrics to begin with) it is wise to accept that uncertainty exists and have that good old “plan B” as a backup, “plan C” in case plan B doesn’t work, and maybe even something all the way through “plan Z.”
 
Don’t get me wrong, there is nothing wrong with being proactive, but the security world – in its infinite wisdom – often shifts focus from one extreme (old-school reactive security) to another (new-world preventive security). On top of that, the current metrics we use for measuring “security” are not necessarily conducive to even moderately good (i.e., acceptable) predictions, but I’ll hold that thought for another blog post. The result is that we think we know what the ecosystem looks like, but don’t know how accurate we are, and so risk of instability ensues.
 
It’s precisely the unpredictability of it all – and it doesn’t take a financial or IT analyst to see that things have become more unpredictable – that necessitates some good reactive planning to deal with risks of instability. In the case of vendors, this means ensuring that business continuity and disaster recovery (BC/DR) plans are reviewed, adjusted, and tested more often in times like these. There is of course some preparation required to achieve this: business impact analyses (BIAs) and careful design and testing of the BC/DR processes – including how to replace one vendor’s services with another’s – to name a few activities.


It is always a good idea to keep these plans up to date anyway, but when vendors are announcing cost cuts left and right, you know you need to be prepared. We should expect more events along the lines of Satyam and the Madoff investment scandal – less noticeable during an economic boom, many of these fraudulent schemes only come to light in a downturn.
 
One of our 2009 themes is taking a fresh look at security programs, how security is organized within the business, and how we assess their efficacy. In light of economics, consumerization, and the cloud, to name a few, vendor management and BC/DR are sure to be as high on the priority list as ever.

December 30, 2008

Consumerization's Impact on IT: Risk of Vendor Lock-in

Analyst Blogger: Trent Henry

Consumerization. It's an emerging IT trend that we're watching with great interest. Although it may not be consistently defined, at the very least consumerization is a blend of user-owned devices deployed in the enterprise (think iPhone) and IT departments handing out stipends for users to purchase their own work-related gear. The idea--no surprise--is to reduce cost and lower overhead. This has been going on for several years, but the newest crop of handheld devices and moves from the likes of Citrix (see "Citrix Tries BYOC") have energized discussion.

Last week I read the article "Assembling the Android army: a sticky platform," and it made me realize there's an aspect of consumerization I haven't heard people discuss: vendor lock-in. When dealing with Burton Group's enterprise clients, we talk until we're blue in the face about reducing switching costs, rejecting proprietary protocols, using standards where possible, and so forth. However, consumer-facing technologies try their best to be sticky: keep consumers working on their platforms, applications, and environments, and make it as difficult as possible to change.

My colleague Jack Santos argues that lock-in and stickiness aren't quite the same thing, and I agree with him. Lock-in means there are clear economic consequences: penalties for switching, extra unexpected expenditures to get the same feature, etc. Stickiness means just that--a sticky affinity to the platform or software that captures your imagination. Similar functionality may even be delivered on another platform for free, but the platform where a user first encounters a compelling feature wins the day because there is no sufficiently compelling reason to switch (even if there is no cost in doing so).

Whether it's stickiness or lock-in, however, consumer devices pose new challenges for enterprises, including difficulty in moving users from a problematic platform (for whatever reason) onto another one. Whatever the advantages of consumerization, this is a detractor that organizations need to keep front-of-mind.

As security practitioners, the last thing we want is for users to cling rigidly to software, devices, or environments that are known to be risky. If one day Microsoft makes an Xbox with a terminal services client, I don't necessarily want users to spend their corporate stipend on that device for corporate email access. But this is a potential logical conclusion for consumerization and the stickiness of platforms and user experience. Time to start writing some user device acceptance policies I do believe....

December 22, 2008

The New IT Reality, Security and the Role of Vendors


Bloggers this week: Eric Maiwald, Phil Schacter, Dan Blum, Trent Henry, Ramon Krikken

This week’s blog post features another discussion between Burton Group security analysts on the new IT reality and how this impacts security.

Eric kicked off the discussion with this thought-provoking list of questions for the team to consider.

One question we have not asked is “who (which vendor) is best positioned to help our clients in the new IT reality?” The new reality I’m speaking of includes cloud computing, virtualization, consumerization of IT, etc. We know that things are changing, and we are also guessing that the economy will accelerate some of the technology changes. We have themes that talk about security organization and about technology (i.e., security for data in transit, data stored on mobile devices, and data that is transiently resident in virtualized and cloud-hosted IT infrastructures), but which of the many security vendors (or non-security vendors) is best positioned to help deal with this? And just as important – what makes that vendor or those vendors best positioned? Is it technology? Integration of technology? Services? Services and technology? Size? Stability? What about reach into non-security areas of IT?


I (Phil) piped up with the following comment. Even figuring out what attributes to look for in a cloud vendor – or the related question of what terms to look for in the SLA from your favorite cloud vendor – will be new and somewhat controversial ground to consider. I recall a few years ago when Microsoft tried to offer Passport as an identity solution for the cloud, and of course it failed – perhaps because of the issue of placing trust in any vendor, especially a vendor that’s always been aggressive in the market, such as Microsoft.

Next, Dan entered the fray with an examination of IT’s shifting reality, offering a series of insights on how we might measure vendors and their capabilities to help organizations deal with their IT challenges.

Is there a new IT reality? We’ve had clouds and ASPs and SLAs and revolutions against IT and downturns before. However, virtualization is more disruptive than anything I’ve seen since the web and WiFi, which are still disrupting, and web services/SOA seems to be at least moderately disruptive. Reality is always in the process of reinventing itself, but perhaps we do have a bit more of a new reality in IT than usual. I’ll grant you that.

OK, so you wish to assess which vendors will thrive (and enable customers to thrive) in this new IT reality? It occurs to me that the discussion of the vendors is premature, since we have not defined the yardsticks by which to assess them.

The yardstick might include measures of financial, organizational, market, and technological strength. In terms of technological strength, the vendor must enable customers to deliver:

  • Web-centric computing
  • User-centric experiences
  • Mobile computing
  • Resource dynamism, and
  • Fine-grained understanding and control of information

These are all things we could attempt to measure. And yet… the ticklish distinction between being some kind of IT vendor versus some kind of security vendor (or both), the fact that customers are so different and have thousands of alternatives for how to protect their IT environments, and the long-tail phenomenon that keeps the market in a structural state of permanent disruption all suggest to me that it may be a futile exercise to try to assess the big vendors in security at this level of abstraction.

Trent was the next analyst to join the discussion.

When you used the word “disruptive,” Dan, it got me thinking. We could have an entire discussion on that alone. My own thought is that the introduction of the personal computer was the most disruptive thing I ever saw—a move from centralized to distributed computing paradigms. Ironically, virtualization plays a role in returning us to centralization once again.

Which vendors actually introduce disruptive technologies or “new ways to work?” It’s never the security vendors, IMHO. Often, technology vendors simply respond to innovation or changing business requirements in customer environments. And security vendors tend to follow even a step (or two, or three) behind them. But elements of consumerization, cloud computing, new-fangled collaboration, etc. are spearheaded by vendors on occasion. With investments in R&D labs to conduct fundamental research, Google, IBM, Microsoft, and others are not merely reacting to new trends but are creating the vision (and tools) to realize the future. How does this figure into an evaluation equation? (And what does it mean when former research heavies like DEC or Xerox—whose PARC has been greatly minimized—are no longer as important?)

Ultimately, this discussion has to rise above just security unless we narrow the question to “How are vendors going to successfully help organizations protect information and infrastructure?” But ultimately, that’s just one piece of the larger, “How can IT enable business?” question.

As often seems to happen lately, the last word fell to Ramon.

I’m also inclined to think that the PC was the most disruptive from a security perspective. While we used to have a centralized system with a few terminals and printers, the number of inputs and outputs, as well as the complexity and unpredictability of the environment, greatly increased with large-scale adoption of distributed computing. And although many of the basic virtualization concepts really don’t differ much from how they were implemented on the mainframe, coupling virtualization with distributed computing complicates things: yes, it does allow a certain amount of centralization on hardware, but increased mobility through connectivity and hardware in all forms means that, from a risk perspective, this will likely show up as a more decentralized, complex, and unpredictable environment (SaaS, cloud, mobile devices, etc.). It is a potential second wave of disruption to security, but this is one where we (security) are actually involved in trying to shape the wave.

If vendors only respond to customer demand, then it makes sense that security vendors are at least second in line. After all, few security controls are used as a direct business tool. The exceptions are those systems used by security teams, who are in the business of providing security services (and so this is where, for example, innovation can happen in the ‘GRC’ market or in something like SIEM). In most other cases I can think of, security controls are a constraint on ‘regular’ business processes, and security vendors will naturally have to respond to what their customers (the makers of the business tools) need.

The unpredictability of the future environment makes it incredibly difficult to assess which fundamental security technologies will work. What seems most difficult at this point is that we have a number of emerging concepts such as consumerization, the cloud, ubiquitous connectivity, and persistent storage, but we have no clue how, for whom, and to what extent these are going to take off. All of these impact the level of control one has over the environment. The amount of control determines the ease of implementing the trusted path, and the trusted path determines how well you can implement use control.

December 18, 2008

On the nature of perimeters and shifting defenses to endpoints and data

Analysts contributing to this blog post: Dan Blum, Eric Maiwald, and Phil Schacter


A recent TechTarget article prompted a discussion within the Burton Group security analyst team that we wanted to share with our blog readers. The discussion centers around the notion that network perimeters are losing their effectiveness as a primary enterprise defense mechanism, and this trend focuses more attention on securing desktop and other mobile devices that access protected IT assets.

Eric responded to the article with this statement:

The shifting of focus to desktop security defense runs counter to the “consumerization of IT” trend, which includes a strategy of allowing employees to bring their own computers to work. If we don't own the endpoint device, how do we enforce software/application/configuration controls on it? If we don't own the endpoint, I think that virtual desktop infrastructure (VDI) and information-based controls will have to be used, and the perimeter shrinks even more (down to the data).

Continuing the thread and adding his experiences to the discussion, Dan added:

I wrote about the perimeter shrinking down to data 9 years ago in the first version of Burton Group’s “Securing the Virtual Enterprise” report. Then it was prognosticating on one possible and somewhat distant future -- now it is just incredibly hard. The recent partnership between Microsoft and EMC/RSA is one of the first inroads into making this practical on a large scale. EMC/RSA will provide the data discovery and policy management; Microsoft will provide the policy enforcement point - just enterprise DRM at this point. Many more PEPs and actions are needed, and they've skirted around interoperability by not working the standards angle on a policy language (when I asked them about policy language and classification metadata standards, they fell back on the self-serving and debatable proposition that the security market is consolidating).

On a related subject, I had an interesting discussion with one of Burton Group’s customers yesterday. This customer is among the vanguard on expanding the scope of unified endpoint security. Whereas I forecast a unified endpoint anti-malware suite in 2006, I underestimated the speed at which unified endpoint protection would come to embody not only comprehensive anti-malware and NAC but also device control, drive encryption, and DLP. These are the requirements that will be in this customer’s forthcoming RFP. One of the key features on this customer’s wish list is a single management console that integrates all of the endpoint’s defense mechanisms. The customer recognizes that current integration is spotty and functionality is immature, but wants to put the RFP out there, see how far they can push the envelope, see how the vendors stack up, and what tradeoffs they should make.

I pointed out that tradeoffs - such as whether you pick McAfee for its endpoint DLP integration or Symantec for its dis-integrated but market-leading Vontu DLP - would be best informed by a security architecture and migration strategy that put some stakes in the ground.

I also pointed out that there are some limitations - even with all the heavy desktop security there will still be leaks; for example, Cisco's endpoint DLP searches for strings like credit card numbers, but a malicious advisor will soon learn to hide the telltales once he understands DLP is resident. The customer is planning a very heavy dose of endpoint protection that will be low surety and expensive to maintain, whereas a locked-down desktop plus VPN backhaul will buy you more protection than all the third-party add-ons in the world.

The customer agreed, and said they could lock down and backhaul the corporate desktop, but that half of their desktops belong to "independent contractors." These contractors own their desktops, have admin rights, and are only contractually required to put the customer’s endpoint security software on the desktop. The customer can't lock down or backhaul these desktops, and network protections won't cover them on the road.

I noted they might benefit from an information-centric security strategy, which should include corralling the data in centrally controlled repositories through the use of terminal services and applications that handle it remotely. Then the DLP, device control, and encryption mechanisms become less critical although protection against keyloggers and screen capture is still important.

“What if the user is offline?” the customer countered. I pointed out that for a three-year architecture plan it may now be more reasonable to assume users are almost-always-connected. Did you see the commercial where AT&T "found the Internet" in the Himalayas (land of the abominable snowman)?

The customer agreed that users might be almost-always-connected, but pointed out that their independent advisors own their contact lists and if they want to have them on their endpoints then this is permitted within their contract. Since salesmen cling to their contact lists like NRA diehards to their guns, they aren't likely to give up control over this data. All that the customer can do is try to restrict them to only their own data and help them not do anything stupid with it.

We didn't actually get to discussing VDI, which you mentioned. Sometimes I've brought that up on other calls, but only to say that I'm not yet sure it's ready for prime time - the surety, compatibility, reliability, and performance of the user experience are uncertain, but what is certain is the requirement for lots of expensive servers and storage back in the data center, where organizations must still maintain a real perimeter.

Bottom line - a perimeter around the data is low surety, high cost, and fraught with problems. The customer should be able to apply a calibrated set of information-centric, locked-down desktop, network, and heavy endpoint security controls so as to optimize the defense in depth that's necessary for different situations and make a more informed procurement tradeoff.


And at that point the discussion ended with Dan getting in the last word, although I’m sure that the team will continue to delve into the issues raised in our research and writing in the coming months and years.

December 08, 2008

iPhone and iTunes: The Thin Edge of Consumerization’s Wedge

Blogger: Dan Blum

You may have thought iTunes was just a music program for Macs, Windows, and iPhone systems but lately we’re hearing questions about whether iTunes is required on enterprise desktops. Behind this interest is Apple’s use of iTunes on the desktop to deliver updates to the popular iPhone. With iPhone now connecting to organizational mail systems such as Microsoft Exchange, iTunes has come to have a business purpose.

Absent the iPhone update requirement, organizations would probably want to discourage iTunes deployment on business systems and networks for the following reasons:

1) iTunes expands workstation attack surfaces
    a. The program has 13 vulnerabilities listed in the National Vulnerability Database dating from 2005 to 2008; the fact that versions before 6.0.5.20 did not verify the authenticity of updates raises particular concerns about whether security was even considered in the program’s original design
    b. The Bonjour service advertisement protocol that iTunes uses could also be used by a compromised system as an attack vector against other LAN-connected systems
2) Malware could be introduced through iTunes, especially on Windows systems
3) iTunes may be used to facilitate copyright violations by sharing unlicensed music or content over LANs, raising liability issues for the organization
4) iTunes is not an enterprise product – it has no enterprise management features that might, for example, be used to disable every function that was not business relevant (potentially everything except for iPhone update); thus organizations are stuck with the whole ball of wax if they allow iTunes to be deployed

Considering the requirement to support iPhone – a useful device that may be extremely popular for large elements of the organization’s workforce – organizations have the following alternatives.

1) Put iTunes on the organization’s standard desktop and (as with everything else) try to mitigate the risk through third party patch management, anti-malware,  intrusion prevention, and other security products
2) Allow individual workers to put iTunes on the organization-owned computers that have been issued to them, and provide users with general education on basic endpoint security concepts such as patching systems, not working in the admin account on a daily basis, and maintaining anti-malware software
3) Ban iTunes but allow end users to update their iPhone from the iTunes on their home computers, or not update their iPhone
4) Ban iTunes and de-authorize iPhone for the organization’s data communications because it cannot be properly updated

iPhone and iTunes - the thin edge of a consumerization wedge - may be just the first of many consumer applications forced on the enterprise. iPhone and other mobile devices are providing a platform for social networking and other chatty applications. And with Apple’s App Store bursting at the seams, there is more to mobile system updates than meets the eye.

Regardless of the choice you make on iTunes today, it is a good idea to push Apple and other vendors to furnish simple, locked-down enterprise management utilities to update organization-approved smartphones and the applications that run on them. Engage with other organizations in your industry, as well as with the vendors, in a conversation about how we get out in front of consumerization. There should be plenty of opportunities for vendors to grow their market share, improve protection, and provide for basic security management needs when their technologies come into business use.

December 01, 2008

Security on the Move

Blogger: Eric Maiwald

We are in a time of rapid change – of course this is not news to anyone working in IT. Virtualized environments, cloud computing, software as a service, and mobile workers have changed much of what was normal in the world of IT. If these things haven’t reached you yet, they will soon, as the economic downturn forces executives to look for ways to cut costs.

There is one thing that all of these technologies and trends have in common – information or data is moving. Our information is no longer safely locked away in a database on a huge mainframe in a physically secure data center some place. Instead, the information is moving from server to server, data center to data center, and vendor to vendor. Even our own employees are moving information all over the place as they extract information into spreadsheets and store it on local hard drives, USB sticks, and handheld devices. All this mobility is enough to give a security guy the shakes.

Let’s take a quick look at the major new technologies and trends and see what can help:

Virtualization
Virtualization means that applications can be placed on different physical hardware so as to utilize the hardware more efficiently. Specific applications will no longer live on specific servers. Moving applications around will impact network zoning and other static controls. We can look for security tools that live within the virtual environments, but they are only beginning to appear. An alternative is to package some controls with the application (make them a part of the virtual environment that moves with the application); controls such as host intrusion prevention might help here. Process and procedure may also help: define risk levels or control requirements for each application and use those criteria as the basis for determining which physical machines are appropriate for different applications, as in the sketch below.
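As a rough illustration of that last suggestion, the placement decision can be reduced to a simple policy check: tag each application with a required control level, tag each host pool with the level it provides, and refuse placements that would drop a sensitive workload onto a weaker pool. The names and levels below are invented purely for illustration.

# Hypothetical placement-policy check: applications carry a required control
# level, host pools carry the level they provide, and placement is refused when
# a workload would land on a weaker pool. Names and levels are invented.
HOST_POOLS = {
    "dmz-cluster": 1,
    "general-cluster": 2,
    "pci-cluster": 3,
}

APP_REQUIREMENTS = {
    "marketing-site": 1,
    "hr-portal": 2,
    "cardholder-db": 3,
}

def placement_allowed(app: str, pool: str) -> bool:
    return HOST_POOLS[pool] >= APP_REQUIREMENTS[app]

assert placement_allowed("cardholder-db", "pci-cluster")
assert not placement_allowed("cardholder-db", "general-cluster")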

Cloud Computing
Cloud computing encompasses a lot of things, including hosting services and SaaS (I’ll deal with SaaS in a moment). If servers and applications are hosted at someone else’s data center, you may not be able to install all of the network controls that you have at your own data center. So here again, moving the controls into the server (or the virtual machine, or the part of the application that you control) may alleviate some of the problems. Take, for example, web application firewalls (WAFs) – you may not be able to deploy a WAF in front of your servers at a hosting facility. If you need the WAF functions, you might look for vendors offering software solutions that load onto the server rather than residing in a separate appliance. Contracts and SLAs are also important if your enterprise is considering hosting facilities. Make sure you check on what the provider is really delivering and work with your legal department to include the necessary language in your contracts.

Software as a Service
SaaS is sometimes considered part of cloud computing, but I wanted to call it out separately as there are some unique aspects to SaaS. The biggest issue is that you will lose all management of technical controls. You will not be in charge of firewalls, IDS/IPS, web filtering, or any other security device on the vendor’s network. At the same time, all of your data will be under the control of the vendor and its employees. So what can you do? There are three big things. First, before the vendor is chosen and the contract is signed, check out the vendor. Look to see what controls are in place and what control standards the vendor is using, and verify that those controls are appropriate to protect your data. Second, have a long talk with your legal department and make them aware of the necessary protections and the risks of a breach. See if they can negotiate with the vendor regarding the right to audit. Third, once the contract is signed, do the follow-up: audit the vendor periodically and check on what they’re doing to make sure your information is protected.

Mobile Workers
Employees are working on the road, from home, and from coffee shops. Information is stored on laptops, USB sticks, and handheld computers. You may not even know where the information is actually going, as employees may put it on their home machines or personal smartphones. Any of these devices can be lost, stolen, or just given away. For computers and devices that are owned by the enterprise, use proper protection: a VPN, a system firewall, and anti-malware controls. Try to manage the systems properly so that they are patched and unnecessary applications are limited. For some devices, you can install a remote-erase function that will remove all data if the device or computer does not check in for a certain amount of time (note that this works better on handhelds than on laptops). You can also use encrypting USB sticks that require a password to access the data on the stick (hey, even a short password is better than nothing!). If your employees are going to use non-enterprise devices, you can set up terminal servers so they can access their desktops (and sensitive information) without having to store too much on the local machine. This also gives you some control over what can be copied to the local machine. When you have employees who need information on non-enterprise machines that will not have reliable network connectivity, you may need to apply controls to the information itself in the form of enterprise rights management.

That was a very quick look at some of the major trends in today’s IT. All of the controls I mentioned need to be considered in the context of the larger IT environment. In other words, do your tradeoffs and identify the risks that you can accept and those that you cannot. Try to mitigate the risks that you can’t accept. Talk to the business. Talk to the other parts of IT as some of the suggestions that I made will have big impacts on networks and servers. You can’t turn back the tide but you can work with it.

November 24, 2008

Government Plans Top Secret HSPD-23 Program for Enhancing Information Assurance


Blogger: Doug Simmons

This week I attended the “Information Assurance and Enabling Identity Management – Security 2008” conference. In light of Burton Group’s research plans to emphasize “Critical infrastructure protection and process networks” as a theme in 2009, I was very interested in the keynote address. The keynote speaker was Steve Chabinsky, Deputy Director, Office of the Director of National Intelligence. There were about 200-250 people in attendance.


Some of Mr. Chabinsky’s more compelling comments were that he believes we “as a nation” have been seduced by technology. This has led us to become lazy, weak, and vulnerable. It appears that our “economic supremacy” relies on untrustworthy technology, and that technologies have not kept pace with the threat. As a result, the U.S. is facing a grave economic and security challenge from a growing array of actors, including well-resourced and persistent adversaries. We have “weak situational awareness.” We either change the path that we’re on “or we lose.”
Mr. Chabinsky then briefed the audience on the Comprehensive National Cyber Security Initiative (CNCSI) – HSPD-23. This directive is classified at the top secret level, but it calls for a national priority and plan for action. The directive considers the full spectrum of threat vectors – network, supply chain, vendor, and mission bridge networks – and addresses both insider and external threats.
In brief, HSPD-23 has 12 initiatives:


1. Reduce government portals connected to the Internet to fewer than 100. Currently there are 4,500 portal connections to the Internet. A consolidation effort is planned, and the end result will be a single, integrated line of defense for government networks.
2. Deploy an intrusion detection system called Einstein II across the civilian-supported networks. This does not include intrusion prevention and is dependent on initiative 1 above.
3. Deploy an intrusion prevention system called Einstein III, which will block or mitigate intrusions.
4. Coordinate and redirect government-funded R&D for cyber activities, possibly through a CTO-level Federal position.
5. Connect current cyber operational centers to share malicious activity information, in order to have an understanding of the entire threat. Mission bridging – leveraging and sharing of cyber defense information across agencies. Shared standards and procedures.
6. Define a government cyber counter intelligence plan.
7. Increase security of classified networks.
8. Expand cyber education. Academic programs teaching techniques and tools to all agencies, encouraging best practices. Even goes to civilian education, K-12, etc.
9. Define leap-ahead security strategies and programs. Get ahead of the bad guys, don’t just play catch up. Look at newer technologies.
10. Define and develop enduring deterrent strategies and programs. Group to be populated by a broad group of experts.
11. Develop a multi-pronged approach for global supply chain risk management.  This is perhaps the most challenging of the initiatives. Threats include counterfeit hardware and software provided by small and large suppliers from around the world. Supply chain and risk management standards are necessary.
12. Extend cyber security into critical private domains. Emphasis on getting the government’s “act in order,” then working with the private sector to coordinate dialogue and approaches on cyber security.


Funding is being considered, and the “powers” behind the initiative are meeting almost daily with the executive and legislative branches to gain the appropriate funding for these initiatives. Mr. Chabinsky is pretty optimistic that the appropriate funding will be found despite the current wars and the state of the economy.


This initiative, of course, opens up a whole host of issues and concerns about the Federal government’s ability to “get its act together” any time soon – before a significant, “world-changing” breach occurs. Coupled with this concern is the protection of U.S. citizens’ civil liberties. What will the over-arching security measures dictate with respect to “national security” at the expense of personal privacy? These are not new questions, but the fact that the directive is gaining so much attention while remaining top secret leaves a lot of room for further investigation and analysis by companies such as Burton Group.


 
