Wednesday, 21 November 2012

Scientists Find Cheaper Way to Ensure Internet Security

Scientists at Toshiba and Cambridge University have perfected a technique that offers a less expensive way to ensure the security of the high-speed fiber optic cables that are the backbone of the modern Internet.
The research, which will be published Tuesday in the science journal Physical Review X, describes a technique for making infinitesimally short time measurements needed to capture pulses of quantum light hidden in streams of billions of photons transmitted each second in data networks. Scientists used an advanced photo detector to extract weak photons from the torrents of light pulses carried by fiber optic cables, making it possible to safely distribute secret keys necessary to scramble data over distances up to 56 miles.
Such data scrambling systems will most likely be used first for government communications systems for national security. But they will also be valuable for protecting financial data and ultimately all information transmitted over the Internet.
The approach is based on quantum physics, which offers the ability to exchange information in a way that the act of eavesdropping on the communication would be immediately apparent. The achievement requires the ability to reliably measure a remarkably small window of time to capture a pulse of light, in this case lasting just 50 picoseconds — the time it takes light to travel 15 millimeters.
The secure exchange of encryption keys used to scramble and unscramble data is one of the most vexing aspects of modern cryptography. Public key cryptography uses a key that is publicly distributed and a related secret key that is held privately, allowing two people who have never met physically to securely exchange information. But such systems have a number of vulnerabilities, including potentially to computers powerful enough to decode data protected by mathematical formulas.
If it is possible to reliably exchange secret keys, it is possible to use an encryption system known as a one-time pad, one of the most secure forms. Several commercially available quantum key distribution systems exist, but they rely on the necessity of transmitting the quantum key separately from communication data, frequently in a separate optical fiber, according to Andrew J. Shields, one of the authors of the paper and the assistant managing director for Toshiba Research Europe. This adds cost and complexity to the cryptography systems used to protect the high-speed information that flows over fiber optic networks.
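The one-time pad itself is simple once key exchange is solved: each byte of the message is XORed with a matching byte of a truly random key that is as long as the message and never reused. A minimal Python sketch of the idea:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The key must be random, as long as the message, and used only once.
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so encryption and decryption are the same operation.
message = b"secret payment details"
key = secrets.token_bytes(len(message))
ciphertext = otp_encrypt(message, key)
recovered = otp_encrypt(ciphertext, key)
assert recovered == message
```

The entire security of the scheme rests on the key distribution problem that quantum key distribution addresses; reuse the pad even once and the system is broken.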
Weaving quantum information into conventional networking data will lower the cost and simplify the task of coding and decoding the data, making quantum key distribution systems more attractive for commercial data networks, the authors said. Modern optical data networking systems increase capacity by transmitting multiple data streams simultaneously in different colors of light. The Toshiba-Cambridge system sends the quantum information over the same fiber, but isolates it in its own frequency.
“We can pick out the quantum photons from the scattered light using their expected arrival time at the detector,” Dr. Shields said. “The quantum signals hit the detector at precisely known times — every nanosecond, while the arrival time of the scattered light is random.”
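The arrival-time filtering Dr. Shields describes can be illustrated with a toy simulation. The 50-picosecond gate and the one-nanosecond pulse period come from the article; everything else below is invented for illustration:

```python
import random

GATE_PS = 50        # detector gate width: 50 picoseconds (from the article)
PERIOD_PS = 1000    # quantum pulses arrive every 1 nanosecond (from the article)

def is_quantum_candidate(arrival_ps: int) -> bool:
    # Keep only detections that fall within the 50 ps gate around each tick.
    return (arrival_ps % PERIOD_PS) < GATE_PS

# Quantum photons land on the clock ticks; scattered light arrives at random.
quantum = [n * PERIOD_PS + random.randrange(GATE_PS) for n in range(100)]
noise = [random.randrange(100 * PERIOD_PS) for _ in range(100)]

assert all(is_quantum_candidate(t) for t in quantum)
# On average only ~5% (50/1000) of the random scattered light survives the gate.
kept_noise = sum(is_quantum_candidate(t) for t in noise)
```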
Despite their ability to carry prodigious amounts of data, fiber-optic cables are also highly insecure. An eavesdropper needs only to bend a cable and expose the fiber, Dr. Shields said. It is then possible to capture light that leaks from the cable and convert it into digital ones and zeros.
“The laws of quantum physics tell us that if someone tries to measure those single photons, that measurement disturbs their state and it causes errors in the information carried by the single photon,” he said. “By measuring the error rate in the secret key, we can determine whether there has been any eavesdropping in the fiber and in that way directly test the secrecy of each key.”

PCI Compliance for Retailers from the Cloud Perspective

One of the key drivers to IT security investment is compliance. Several industries are bound by various mandates that require certain transparencies and security features. They are designed to mitigate aspects of risk including maintaining the sacrosanctity of customer information, financial data and other proprietary information.
One such affected vertical is retail. Whether you’re Wal-Mart or Nana’s Knitted Kittens, if you store customer information or process payments using customers’ credit cards, you are required to comply with a variety of security standards. Although there are several auditing agencies and mandating bodies, today we will concentrate on the one compliance framework that applies to virtually every retailer: PCI.
The PCI (Payment Card Industry) Data Security Standard requires that ALL companies that process, store or transmit credit card information maintain a secure environment. Now of course, not all merchants are created equal. Nana obviously doesn’t process the volume or the dollar amount of a national or even a high-traffic regional retailer. However, this doesn’t let Nana off the hook. Her online shopping cart still needs to be Payment Application DSS validated (PCI compliant). She is still required to pass security audits of her network… just not as often.
But for the sake of this example, let’s assume you are a retailer who processes more than 20,000 transactions a year and the administrative burden of PCI is a real concern. In fact, it is a business necessity to maintain merchant accounts with VISA, American Express and MasterCard. And it is hugely important to keep the confidence of your customers. Fines for non-compliance aside, a breach of your network could cost millions of dollars. And that doesn’t begin to calculate the cost of customer defection through loss of trust.
Most, if not all, retailers have some sort of PCI monitoring in place. However, these systems are often cumbersome, expensive and resource-heavy. Additionally, too many retail organizations don’t employ a compliance officer, much less a dedicated security person. This doesn’t mean these functions aren’t part of someone’s job description; typically, they are yet another line item in a plethora of competing priorities and mission-critical initiatives. Because security can be seen as a cost center, simply doing the bare minimum to meet compliance is often an attractive alternative. Until now. Until the cloud. More specifically, a holistic enterprise security initiative deployed and managed from the cloud.
So how does cloud-based security/security-as-a-service meet the requirements of PCI while driving down costs, freeing up personnel resources and providing an easy-yet-comprehensive suite of capabilities and functions? The easiest way to illustrate the potential is to look at the individual PCI requirements and how they are addressed from the cloud:
1.    Protect Data: A cloud-based SIEM offering can accomplish the most important feature of this requirement: the ability to instantly recognize any change, intrusion or activity on your firewall IN REAL TIME. That’s the key. There isn’t the lag of looking at all the logs a week later when the damage has been done, or the difficulty of telling a suspicious action from a white-noise false positive. While many SIEM products can do just this, cloud-based offerings provide the additional benefit of 24/7/365 monitoring across the entire enterprise. And you get Fortune 500-class visibility and protection for literally pennies on the dollar.
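As a rough illustration of the real-time triage idea, a SIEM rule engine separates events worth alerting on from white-noise traffic as they stream in. The event names and rule set here are hypothetical, not from any particular SIEM product:

```python
# Hypothetical event types worth paging an analyst for, in real time,
# rather than discovering in a batch log review a week later.
SUSPICIOUS = {"RULE_CHANGED", "ADMIN_LOGIN_FAILED", "PORT_SCAN_DETECTED"}

def triage(event: dict) -> str:
    if event.get("type") in SUSPICIOUS:
        return "ALERT"      # surface immediately to the on-call analyst
    return "ARCHIVE"        # white noise: keep only for the audit trail

stream = [
    {"type": "CONN_ALLOWED", "src": "10.0.0.5"},
    {"type": "RULE_CHANGED", "src": "203.0.113.9"},
]
decisions = [triage(e) for e in stream]
assert decisions == ["ARCHIVE", "ALERT"]
```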

For further information visit: http://cloudcomputing.sys-con.com/node/2435195

Cloud-Integrated Storage Broadens Its Appeal

While there is no denying that cloud storage has delivered the promise of unlimited “pay-as-you-go” storage capacity, simplified disaster recovery, and savings in costs and maintenance, these attributes alone aren’t driving the growing business adoption. Instead, it is the rise of cloud-integrated storage appliances, which have augmented cloud storage to provide the levels of security, availability, connectivity and performance found in traditional storage systems, that has made cloud storage a viable choice for business.
With this week’s announcement of TwinStrata CloudArray 4.0, the flexibility, availability and performance of cloud-integrated storage has improved further, narrowing the functionality gap between cloud-integrated storage and traditional data storage systems, while leveraging all of the benefits of cloud. Some of the highlights include:
Choice of server connectivity: With iSCSI and NAS connectivity options, the broadest range of applications now seamlessly interoperate with cloud storage, regardless of whether the requirement is file or block-based access
High performance: New appliances offering hybrid SSD configurations enable demanding applications to use cloud storage without the expected performance trade-off of cloud storage
“Future-proof” platform flexibility: New in-cloud platforms, in addition to virtual and physical platforms, protect your investment in cloud-integrated storage, which can continue to be leveraged even if your environment migrates from physical to virtual or entirely into the cloud
Broadest choice of providers: Over 20 different cloud providers to choose from means there is never any worry of vendor lock-in and always the choice of best-of-breed cloud storage
Higher availability: Fully redundant appliances with no single point of failure allow data either to be stored directly in the cloud or kept as local copies replicated to the cloud, minimizing the risk of downtime and offering a built-in disaster recovery strategy without offsite infrastructure
If you haven’t yet considered augmenting your IT environment with cloud-integrated storage, now is the time to examine all of the benefits cloud storage can offer. We’ll be hosting a webinar next Wednesday to talk about our new enterprise-class capabilities and how our customers are using cloud-integrated storage to streamline their storage environments. If you’d like to join us, feel free to register at https://www1.gotomeeting.com/register/586633720

Friday, 9 November 2012

Five Essential Components of Virtual Desktop ROI

Server virtualization was, for many in IT, a major win. IT departments and data centers were suddenly able to do a whole lot more with a whole lot fewer resources. Naturally, as time goes on, it’s become more and more attractive for IT to consider desktop virtualization. Yet, the virtual desktop requires an infrastructure that’s simply not in place for many companies, and the ROI isn’t always clear from the start.
If you’re going to see desktop virtualization pay off for your organization, there are five factors you need to look at closely:
Hardware failure. Desktop computers have components prone to failure. Virtual desktop clients are increasingly using solid-state components, dramatically reducing the number of moving parts – which are the most likely cause of failure. Some are even avoiding fans and complex motherboards that can be short-circuited. If you can reduce hardware failure by 75%, virtualizing the desktop starts to look really attractive.
The cost to upgrade. Upgrading the processing power of a virtualized desktop is as simple as reallocating VM resources. You don’t have to order new desktops or desktop components, and you don’t have to deploy them either.
End-user support and management. You don’t need to remote in to work on a virtual desktop; you simply open your hypervisor, just like you would with a virtual server. You can manage the boot of any virtual desktop from your office, freeing up valuable staff time.
Deployment and scalability. While larger organizations might have pre-configured desktops sitting around waiting for deployment, most organizations don’t have that luxury. A new virtual desktop means simply pushing a pre-loaded template. It can be done in less than an hour.
Performance. With virtual desktops, the only potential bottleneck is I/O. All of the data travels across the network backbone, rather than out to the edge. Virtualized desktops often deliver an increase in performance as well as a reduction in latency.
If your organization is trying to justify desktop virtualization, take a look at things from the perspective of these five factors and see whether it can work for you.
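A back-of-the-envelope sketch of the hardware-failure factor alone shows how the ROI arithmetic might run. Every figure below except the 75% failure reduction mentioned above is an illustrative assumption, not a vendor number:

```python
# Illustrative assumptions only: fleet size, failure rate, and per-incident
# cost are invented; the 75% reduction comes from the article.
desktops = 500
physical_failure_rate = 0.08     # assumed annual hardware failure rate
failure_reduction = 0.75         # "reduce hardware failure by 75%"
cost_per_failure = 400           # assumed repair + lost-productivity cost, $

physical_cost = desktops * physical_failure_rate * cost_per_failure
virtual_cost = physical_cost * (1 - failure_reduction)
annual_savings = physical_cost - virtual_cost
# 40 failures/yr -> $16,000; virtualized -> $4,000; savings -> $12,000/yr
assert annual_savings == 12000.0
```

Running the same exercise across the other four factors, with your own organization's numbers, gives a defensible first-pass ROI figure.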
For further information visit: http://cloudcomputing.sys-con.com/node/2432059

Cloud Computing: Big Data at Sears


Sears, plus its acquired entity Kmart, belongs to Sears Holdings, whose goal is to get closer to its customers. That requires big-time analytic capabilities. While revenue at Sears has declined from $50B in 2008 to $42B in 2011, rivals like Wal-Mart, Target and Amazon have grown steadily with better profits. Amazon’s retail business has gone from $19B in revenue in 2008 to $48B in 2011, passing Sears for the first time.
Sears used IMS (IBM’s first-generation database product) on the mainframe, plus Teradata. Its ETL process, using IBM DataStage software on a cluster of distributed servers, took 20 hours to run. Since the adoption of Hadoop back in 2010, one of the steps (which accounted for 10 of those 20 hours) now runs in 17 minutes. Their slogan is “ETL must die,” as they would like to load raw data directly into Hadoop. The old systems consisted of EMC Greenplum, Microsoft SQL Server, and Oracle Exadata (four boxes) for analytical workloads. That is all being replaced by Hadoop, Datameer, MySQL, InfoBright, and Teradata.
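The kind of aggregation step Hadoop parallelizes across a cluster can be sketched as a map and reduce pair. The record layout and field names below are invented for illustration and have nothing to do with Sears' actual schemas:

```python
from collections import defaultdict

def mapper(record: str):
    # Emit (store_id, sale_amount) pairs from a raw CSV line of sales data.
    store_id, _sku, amount = record.split(",")
    yield store_id, float(amount)

def reducer(pairs):
    # Sum sales per store; Hadoop runs many such reducers in parallel,
    # each handling one partition of the key space.
    totals = defaultdict(float)
    for store, amount in pairs:
        totals[store] += amount
    return dict(totals)

raw = ["s1,sku9,10.0", "s2,sku3,5.5", "s1,sku4,2.5"]
result = reducer(p for line in raw for p in mapper(line))
assert result == {"s1": 12.5, "s2": 5.5}
```

Because the mappers read raw input lines directly, this style of job is what lets Sears argue that raw data can be loaded and processed in place, rather than passing through a separate 20-hour ETL stage first.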
Sears’ process for analyzing marketing campaigns for loyalty club members used to take six weeks on mainframe, Teradata, and SAS servers. The new process running on Hadoop can be completed weekly. For certain online and mobile commerce scenarios, Sears can now perform daily analyses. The Hadoop systems, at 200 terabytes, cost about one-third of comparable 200-TB relational platforms. Mainframe costs have been reduced by more than $500K per year while delivering 50-100 times better performance on batch jobs. The volume of data on Hadoop is currently at 2 petabytes. As the CTO says, Hadoop is no longer a science project at Sears – critical reports run on the platform, including financial analyses; SEC reporting; logistics planning; and analysis of supply chains, products, and customer data. Sears uses Datameer, a spreadsheet-style tool that supports data exploration and visualization directly on Hadoop, and claims to develop interactive reports in 3 days that used to take six to 12 weeks.
Sears has actually spun off a new subsidiary, called MetaScale, to offer cloud services to other retailers on its Hadoop platform. The company is leveraging three years of acquired Hadoop expertise to make money in analytics services. Many open questions remain about whether Hadoop will be the platform that brings big success to Sears in the future.
For further information visit: http://cloudcomputing.sys-con.com/node/2433869

Thursday, 8 November 2012

Is the Way to the European Cloud Paved Mainly with Good Intentions?

At the end of last month the EU released its plans for "Unleashing the Potential of Cloud Computing in Europe". But although the document(s) - just like EU commissioner Kroes in this video - do a good job of describing in non-technical terms what cloud is and why Europe should care about having a competitive cloud position, it kind of stops there.
Even though it defines three key actions - around standards, terms, and the public sector taking a lead role - most described actions consist of softer items such as "promoting trust by coordinating with stakeholders", "identifying best practices," "promoting partnerships" and "investigating how to make use of other available instruments." Now of course European cloud computing can benefit from funding reserved for other EU initiatives such as the Connecting Europe Facility, and from side initiatives such as the "Opinion on Cloud Computing" published by the Article 29 working party, which gives privacy-related contracting guidance. But in general the recently published plan seems to be more about what could and should be than about what is or will be.
Meanwhile, both regular and social media seem to be increasingly negative regarding the progress that Europe is making. With the North American continent clearly being the biggest cloud geo, and Asia-Pacific - thanks in part to its many emerging economies - claiming the position of fastest-growing cloud geo, that leaves only less desirable labels - such as slowest or most fragmented - for describing the state of cloud activities in Europe.
Continuing to look at why things are harder and slower in Europe will just further reinforce negative sentiments; it is better to focus on European examples that are showing success. In "Switch: How to Change Things When Change Is Hard", the brothers Dan and Chip Heath offer an engaging recipe for doing just that. In their book they describe how, by identifying "Bright Spots" (small pockets of positive exceptions), potential future success scenarios can be discovered. Next, they encourage promoting very specific actions instead of giving broad directions. For example: instead of asking people to eat healthier (too vague, too hard), they suggest healthcare activists promote a specific action such as "buying skimmed instead of full-fat milk" (simpler, easier, more actionable, more effective).
So in Europe, instead of pushing cloud as a concept (too vague, too hard), why not focus on identifying a few very specific and very simple scenarios, including their specific benefits? Europe can then concentrate on removing any (legal, fiscal, economic, cultural) barriers to these specific scenarios and promote them clearly and broadly. And in doing so, it is best to follow the Heath brothers’ advice to promote this both on a rational and on an emotional level (or as the brothers put it eloquently: both "Direct the Rider and Motivate the Elephant”).
P.S. What potential European cloud Bright Spots would you suggest (using the comment field on this blog)?
For further information visit: http://cloudcomputing.sys-con.com/node/2420564

Little Data, Big Data and Very Big Data (VBD) or Big BS?

This is an industry trends and perspective piece about big data and little data, industry adoption and customer deployment.
If you are in any way associated with information technology (IT), business, scientific, media and entertainment computing or related areas, you may have heard big data mentioned. Big data has been a popular buzzword bingo topic and term for a couple of years now. Big data is being used to describe new and emerging along with existing types of applications and information processing tools and techniques.
I routinely hear from different people or groups trying to define what is or is not big data and all too often those are based on a particular product, technology, service or application focus. Thus it should be no surprise that those trying to police what is or is not big data will often do so based on what their interest, sphere of influence, knowledge or experience and jobs depend on.
Not long ago, while out travelling, I ran into a person who told me that big data is new data that did not exist just a few years ago. It turns out this person was involved in geology, so I was surprised that somebody in that field was not aware of, or working with, geophysical, mapping, seismic and other legacy or traditional big data. He was basing his statements on what he knew, heard or was told about, or on the sphere of influence around a particular technology, tool or approach.
FWIW, if you have not figured it out already: like cloud, virtualization and other technology-enabling tools and techniques, I tend to take a pragmatic approach vs. becoming latched onto a particular bandwagon (for or against), per se.
Not surprisingly, there is confusion and debate about what is or is not big data, including whether it applies only to new vs. existing and old data. As with any new technology, technique or buzzword bingo topic theme, various parties will try to place what is or is not under the definition to align with their needs, goals and preferences. This is the case with big data, where you can routinely find proponents of Hadoop and MapReduce positioning big data as aligning with the capabilities and usage scenarios of those related technologies for business and other forms of analytics.
Not surprisingly, the granddaddy of all business analytics, data science and statistical-analysis number crunching is the Statistical Analysis Software (SAS) from the SAS Institute. If these types of technology solutions and their peers define what is big data, then SAS (not to be confused with Serial Attached SCSI, which can be found on the back end of big data storage solutions) can be considered first-generation big data analytics, or Big Data 1.0 (BD1 ;) ). That means Hadoop MapReduce is Big Data 2.0 (BD2 ;) ;) ) if you like, or dislike for that matter.
Funny thing about some fans, proponents or surrogates of BD2: they may have heard of BD1 tools like SAS while having only a limited understanding of what they are or how they can be used. When I worked in IT as a performance and capacity planning analyst focused on servers, storage, network hardware, software and applications, I used SAS to crunch various data streams of event, activity and other data from diverse sources. This involved correlating data and running various analytic algorithms on it to determine response times, availability, usage and other things in support of modeling, forecasting, tuning and troubleshooting. Hmm, sounds like first-generation big data analytics, or Data Center Infrastructure Management (DCIM) and IT Service Management (ITSM), to anybody?
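That first-generation number crunching boils down to simple statistics over measurement streams. A toy sketch of the response-time side of it (the sample data and the simple percentile rule are invented for illustration):

```python
# Capacity-planning style crunching: given response-time samples,
# report the statistics a performance analyst actually watches.
def percentile(samples, p):
    # Simple nearest-rank style percentile, adequate for a sketch.
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(p / 100 * len(ordered)))
    return ordered[idx]

response_ms = [12, 15, 11, 14, 250, 13, 16, 12, 14, 13]
mean = sum(response_ms) / len(response_ms)
p95 = percentile(response_ms, 95)

# The mean (37 ms) looks tame; the tail exposes the 250 ms outlier
# that averages alone would hide.
assert p95 == 250
```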
Now to be fair, comparing SAS, SPSS or any number of other BD1-generation tools to Hadoop and MapReduce or BD2 second-generation tools is like comparing apples to oranges, or apples to pears. Let’s move on, as there is much more to what is big data than simply a focus on SAS or Hadoop.
This is where some interesting discussions, debates or disagreements can occur between those who latch onto or want to keep big data associated with being something new and usually focused around their preferred tool or technology. What results from these types of debates or disagreements is a missed opportunity for organizations to realize that they might already be doing or using a form of big data and thus have a familiarity and comfort zone with it.
By having a familiarity or comfort zone, vs. seeing big data as something new, different, hype or full of FUD (or BS), an organization can be comfortable with the term big data. Often, after taking a step back and looking at big data beyond the hype or FUD, the reaction is along the lines of: oh yeah, now we get it, sure, we are already doing something like that, so let’s take a look at some of the new tools and techniques to see how we can extend what we are doing.
Likewise, many organizations are doing big bandwidth already and may not realize it, thinking that it is only what media and entertainment, government, technical or scientific computing, or high-performance and high-productivity computing (HPC) does. I'm assuming that some of the big data and big bandwidth pundits will disagree; however, if in your environment you are doing many large backups, archives, content distributions, or copies of large amounts of data for different purposes, then you are consuming big bandwidth and need big bandwidth solutions.
Yes I know, that's apples to oranges and perhaps stretching the limits of what is or can be called big bandwidth based on somebody's definition, taxonomy or preference. Hopefully you get the point that there is diversity across various environments as well as types of data and applications, technologies, tools and techniques.
What about little data then?
I often say that if big data is getting all the marketing dollars to generate industry adoption, then little data is generating all the revenue (and profit or margin) dollars through customer deployment. While tools and technologies related to Hadoop (or Haydoop if you are from HDS) are getting industry-adoption attention (e.g. marketing dollars being spent), revenues from customer deployment of little data keep growing. If little data is databases and things not generally lumped into the big data bucket, and if you think or perceive big data to be only Hadoop MapReduce-based data, then does that mean all the large unstructured non-little data is very big data, or VBD?
For further information visit: http://cloudcomputing.sys-con.com/node/2420582

Friday, 2 November 2012

Microsoft Private Cloud 2.0 – BYOD Virtual Infrastructure


BYOD Private Cloud

Microsoft provides reference documents describing its Private Cloud Fast Track best practices, which explain in detail how its core suite of Windows Server, Hyper-V and System Center can be used to build internal ‘IaaS’ – Infrastructure as a Service.

They position this as an enabler of PaaS and SaaS, the higher-layer application capabilities that this platform can better enable. In their paper on Flexible Workstyles they start to look at the solution areas these types of platforms can be targeted at. Microsoft headlines this around the transformational effect upon IT of trends like ‘BYOD’ – Bring Your Own Device – and how, in enterprise IT design terms, this is forcing an evolution from device-centric to user-centric architecture. With this in mind, they describe the major options for mobile-enabling applications:

Device-optimized applications – For example, Microsoft Office Mobile has been developed to provide Windows Phone users a tailored version of Word, Outlook, Lync etc. on their device.

Web applications – Any device that can access the web can therefore access any web-based SaaS resource. This is more ubiquitous in an off-the-shelf manner, but the device experience is not as good.

Virtual Desktops and Applications - The ideal compromise of the two is ‘VDI’ – Virtual Desktop Infrastructure, achieved through the Microsoft Desktop Virtualization portfolio and Microsoft Application Virtualization (App-V).

Microsoft proposes this latter option as the ideal one for enterprise users because it provides an architecture and set of utilities for enhancing and managing the important aspects of this scenario, such as:

User State Virtualization - Users shifting between different devices and networks need their personal data to travel with them, not be locked to one specific device. USV is one mechanism to support this.

Unified Management - Tools like Configuration Manager use variables such as user identity, application dependencies, and network and device characteristics to dynamically determine the appropriate deployment type for a specific device.

Unified Asset Inventory and Device Management - This encompasses asset and device management, enabling IT staff to better track and control all IT resources and, where needed, remotely manage them, such as by performing device data wipes.

End-to-end Security - To ensure compliant protection of devices there are a variety of security features, such as drive encryption, anti-malware, integration with IPsec for VPN security, and AD-based Rights Management, offering end-to-end protection from the device right through the network.

For further information visit: http://cloudcomputing.sys-con.com/node/2372354

Enhancing Microsoft Exchange 2013


Microsoft Exchange load balancing is just the beginning… Throughout the years, F5 BIG-IP has been a critical component supporting Microsoft Exchange, implementing a variety of performance, security, and architectural requirements.

During that time, we've seen Microsoft Exchange evolve from a fairly simple small-business solution to a robust enterprise-class solution with an integrated ecosystem of services providing for communication, collaboration, and cooperation. As Microsoft prepares to launch its latest version of Exchange, again we're seeing some evolutionary changes in its architecture. Most prominent is the elimination of the requirement for persistence; the Client Access Server (CAS) component is now a stateless proxy. For those paying attention through the years, the implementation of persistence within Microsoft Exchange deployments was more often than not architecturally delegated to an F5 BIG-IP.

Does the elimination of the requirement render BIG-IP obsolete?

Of course not. While there's been some conjecture that layer 4 load balancing services will suffice for CAS 2013 (and for simple load balancing scenarios, they will), such statements are short-sighted, failing to recognize the increasing role of mobile and roaming clients and the need to address the core performance and security of public-facing applications (of which Exchange is certainly one). The delegation of persistence management to BIG-IP was often deemed most efficient because BIG-IP was a part of the architecture for other application delivery services – perimeter security, performance, server efficiency, multi-site resiliency, and, of course, scalability.
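The layer 4 vs. layer 7 distinction comes down to what the load balancer can see when picking a back end. A hypothetical sketch (the server names and cookie scheme are invented, and bear no relation to actual BIG-IP configuration):

```python
import hashlib

SERVERS = ["cas1", "cas2", "cas3"]   # hypothetical CAS pool

def pick_server_l4(client_ip: str) -> str:
    # Layer 4: only the packet headers (e.g. source IP) are visible,
    # so every client behind one corporate NAT maps the same way.
    return SERVERS[sum(client_ip.encode()) % len(SERVERS)]

def pick_server_l7(session_cookie: str) -> str:
    # Layer 7: the proxy terminates the connection and can read the
    # application payload, pinning each user session to one server
    # regardless of which network or device the user roams to.
    digest = hashlib.sha256(session_cookie.encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]

# The same session always lands on the same back end.
assert pick_server_l7("sess-42") == pick_server_l7("sess-42")
```

With a stateless CAS, the layer 7 persistence path is no longer mandatory, which is exactly the change the article describes; the point being made is that the other layer 7 services (security, acceleration) remain.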

Scale and multi-site resiliency are imperatives today, with growth of users and devices and locations from which e-mail needs to and will be accessed. A distributed workforce can't afford to lose productivity due to slow delivery of e-mail or inability to readily access important content via any access medium, regardless of location. These are the kinds of challenges F5 BIG-IP addresses over and above routine tasks like load balancing.

These challenges have not been eliminated with Microsoft's most recent version of Exchange, and BIG-IP is still the ADC of choice for providing these services for deployments large and small. BIG-IP does layer 4 load balancing just as well as layer 7, after all, but also offers a robust set of delivery services that go well beyond either function. Ryan Korock, Technical Director focusing on Microsoft-partner initiatives, has a great list of 8 reasons why an ADC remains invaluable to Microsoft Exchange implementations, which goes into more detail on what BIG-IP has offered – and continues to offer – Microsoft Exchange deployments.

For further information visit: http://cloudcomputing.sys-con.com/node/2372344

Gartner Highlights the Importance of Third-Party Validation


Gartner recently published a report that highlights the growing importance of Cloud Access Security Brokers - solution providers that offer unified cloud computing security platforms. This solution category includes a new class of products that Gartner terms Cloud Encryption Gateways, which encrypt or tokenize sensitive information before it leaves an organization's firewall. These solutions, if designed properly, allow organizations to maintain control of sensitive data since they replace the original "clear-text" values with indecipherable replacement values in the cloud. Businesses are adopting these solutions to address issues raised by data residency requirements and data privacy regulations driven by a host of industry compliance mandates. In addition to enabling organizations to satisfy the data protection needs, products like those from PerspecSys also preserve the user experience with the SaaS application (such as Salesforce.com or Oracle CRM). With PerspecSys, critical functionality like Search is retained even when strong encryption (e.g., FIPS 140-2 validated modules) or tokenization is used to protect the data being sent to the cloud.
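A cloud encryption gateway's tokenization step can be sketched in miniature. This is a toy illustration, not how PerspecSys actually works: sensitive values are swapped for opaque tokens before data leaves the firewall, and the token-to-value mapping never leaves the premises:

```python
import secrets

# The vault (token -> clear text) stays on premises, behind the firewall.
vault = {}

def tokenize(value: str) -> str:
    # Replace a sensitive value with an indecipherable random token.
    token = "tok_" + secrets.token_hex(8)
    vault[token] = value
    return token

def detokenize(token: str) -> str:
    # Only the on-premises gateway can map tokens back to clear text.
    return vault[token]

record = {"name": tokenize("Alice Smith"), "city": "Boston"}
# The cloud application only ever sees the opaque replacement value...
assert record["name"].startswith("tok_")
# ...while the gateway restores the original on the way back in.
assert detokenize(record["name"]) == "Alice Smith"
```

Because the tokens carry no information about the original values, data residency and privacy obligations can be satisfied even when the records themselves live in a SaaS application; the functional trade-off (e.g. searching and sorting on protected fields) is exactly the design tension the article goes on to discuss.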

Gartner highlights the importance of using strong tokenization capabilities that have been evaluated by an independent third party. Practitioners from the payment card industry, where I spent quite a few years, are very familiar with this requirement.

Enterprises should make sure the providers they depend on to satisfy regulatory compliance or strict data privacy and residency requirements can deliver on the expected results. One way is to look for assessments from third parties like those referenced above. Well-qualified independent auditors that use established testing and evaluation criteria can validate that solutions are doing what providers say they do. This type of assessment ought to be a no-brainer for the technology providers and is something that enterprises should expect.

What else? Well, it may seem intuitive, but another important step is to look for products that use well-vetted and accepted industry approaches. For example, within the PerspecSys solution, we made great efforts to ensure that our customers could use industry-standard cryptographic modules that they have approved based on internally established screening criteria as well as external benchmarks, such as NIST FIPS 140-2 validation. When we initially began designing our solution, we considered developing a proprietary encryption algorithm that would make it simpler for us to preserve SaaS application functionality such as "Searching" and "Sorting" on data that was encrypted inside of the cloud. Creation of such an algorithm requires the designer to tweak and modify ("weaken") a strong algorithm in order to get the desired result. But when we considered the long-term ramifications of this approach, we understood that it ran completely counter to what enterprise security organizations would (and should) expect from a solution meant to protect their most sensitive business data. Standards-based security, robust and scalable, without exception - this continues to be a central design principle that enterprise security professionals require, and what we deliver as evidenced by the award-winning PerspecSys Cloud Data Protection Gateway.
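The tokenization pattern described above can be sketched in a few lines. This is a minimal illustration of the general approach, not PerspecSys's actual implementation; the `TokenVault` class and the token format are assumptions made for the example. The point is that clear-text values and the mapping stay on premises, and only opaque tokens ever leave the firewall.

```python
import secrets

class TokenVault:
    """Keeps clear-text values on premises; only opaque tokens leave the firewall."""

    def __init__(self):
        self._forward = {}   # clear-text -> token
        self._reverse = {}   # token -> clear-text

    def tokenize(self, value: str) -> str:
        # Reuse the existing token so repeated values stay consistent in the cloud.
        if value in self._forward:
            return self._forward[value]
        token = "tok_" + secrets.token_hex(16)
        self._forward[value] = token
        self._reverse[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._reverse[token]

vault = TokenVault()
outbound = vault.tokenize("Jane Doe")    # what the SaaS application stores
restored = vault.detokenize(outbound)    # what the on-premises user sees on the way back
```

Because the same clear-text value always maps to the same token, exact-match lookups against the cloud copy still work, which is one reason tokenization can preserve functionality like Search that randomized strong encryption would otherwise break.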

Using Deep Virtualization to Rationalize Platforms and Data Centers

The latest Briefings Direct end-user case-study uncovers how outerwear and sportswear maker and distributor Columbia Sportswear has used virtualization techniques to significantly improve its business operations.

We’ll see how Columbia Sportswear’s use of deep virtualization assisted in rationalizing its platforms and data center and led to benefits in its enterprise resource planning (ERP) implementation. We’ll also learn how virtualizing mission-critical applications formed a foundation for improved disaster recovery (DR) best practices.

Here are some excerpts:

Gardner: Tell me a little bit about how you got into virtualization. What were some of the requirements that you needed to fulfill at the data center level?

Leeper: Pre-2009, we'd experimented with virtualization. It was one of those things I had my teams working on, mostly so we could tell my boss that we were doing it, but there wasn’t a significant focus on it. It was a nice toy to play with in the corner, and it helped us in some small areas, but there were no big wins there.

Columbia Sportswear is the worldwide leader in apparel and accessories. We sell primarily outerwear and sportswear products, and a little bit of footwear, globally. We have about 4,000 employees, 50 some-odd physical locations, not counting retail, around the world. The products are primarily manufactured in Asia, with sales distribution happening in both Europe and the United States.

My teams out of the U.S. manage our global footprint, and we are the sole source of IT support globally from here. In mid-2009, the board of directors at Columbia decided that we, as a company, needed a much stronger DR plan. That included the construction of a new data center for us to house our production environments offsite.

Extremely successful

We were extremely successful in that process. We were able to move our primary data center over a couple of weekends with very little downtime to the end users, and that was all built on VMware technology.

About a week after we had finished that project, I got a call from our CIO, who said he had purchased a new ERP system, and Columbia was going to start down the path of a fully new ERP implementation.

I was being asked at that time what platform we should run it on, and we had a clean slate to look everywhere we could to find what we felt was the safest and most stable platform to run the crown jewels of the company, which is ERP. For us, that was going to be the SAP stack.

Private cloud

Leeper: We consider ourselves to have a private cloud on-site. My team will probably start laughing at me for using that term, but we do believe we have a very flexible and dynamic environment to deploy, based on business requests, on premises, and we're pretty proud of that. It works pretty well for us.

Where we go next is all over the place. One of the things we're pretty happy about is the fact that we can think about things a little differently now than probably a lot of our peers, because of how migratory our workloads can be, given the virtualization. We started looking into things like hybrid cloud approaches and the idea of maybe moving some of our workloads out of our premises, our own data facilities, to a cloud provider somewhere else.

For us, that's not necessarily the discussion around the classic public cloud strategies for scalability and some of those things. For us, it's temporary space at times. If we are, say, moving an office where we have physical equipment on-premises, we want to be able to provide zero downtime.

It would be nice to be able to shutdown their physical equipment, move their data, move their workloads up to a temporary spot for four or five weeks, and then bring it back at some point, and let users never see an outage while they are working from home or on the road. There are some interesting scenarios around DR for us and locations where we don't have real-time DR set up.

For instance, we looked into some issues in Japan a year or so ago, when the country was unfortunately dealing with power shortages in the fallout of the earthquake and the tsunami.

Thursday, 1 November 2012

The API: From Hypertext to Hyperdata

A friend shared a recent C|Net article discussing the use of 404 error pages to feature missing children notices. Following links leads to a European effort to integrate information about missing children into 404 pages (and others, there's no restriction that it be on a 404). By signing up, you're offered some fairly standard HTML code to embed in the page. It's very similar to advertising integration.

So I jumped on over to the US' National Center for Missing and Exploited Children (NCMEC) hoping to find something similar or even better, an API. I was disappointed to find no real way to integrate the same data – not even simple dynamic HTML. All that data – all those children – are missing out on opportunities for exposure. Exposure that might mean being found.

API EVOLUTION

One of the things Google did right very early on was recognizing that API access would be a significant factor in the success of any web site. Certainly Google's early APIs were little more than HTTP GETs or POSTs that could easily be integrated into other HTML, which, on the surface, is really not all that innovative; after all, the entire concept of hypertext is based on the premise of linking together information using HTTP. But Google, and others that followed like Facebook, have continued to move along an evolutionary path that has graduated from hypertext to hyperdata: integration via RESTful APIs that return data, not HTML text, enabling usage and display of that data in a format more suitable to the integrator, and able to be integrated with other services such as maps or other sources of data.

That's important, because while HTML might be great for the web it's not always in the right format for the platform. Perhaps I'd like to be able to include a brief "child missing in your area" alert on any page – or in a header or footer or sidebar - that then links to more information, giving users the opportunity to find out more and serving the community but doing so in a way that flows naturally in my site or mobile application. I'd also like to localize that data, so as end-users roam so does information on which missing children are highlighted.
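The hypertext-to-hyperdata shift is easy to see in code. Below is a minimal sketch assuming a purely hypothetical JSON payload (no such NCMEC endpoint exists, which is precisely the article's complaint): the service returns data, and the integrator decides how to present it.

```python
import json

# A hypothetical hyperdata response: the service returns structured data,
# not pre-rendered HTML, so presentation is left to the integrator.
payload = json.loads("""
{
  "alerts": [
    {"name": "J. Smith", "age": 9,
     "last_seen": {"city": "Portland", "lat": 45.52, "lon": -122.68}}
  ]
}
""")

def render_sidebar(alerts):
    # One integrator's choice of presentation; another might feed a map or a push notification.
    return ["Missing: {name}, age {age}, last seen near {city}".format(
        name=a["name"], age=a["age"], city=a["last_seen"]["city"]) for a in alerts]

lines = render_sidebar(payload["alerts"])
```

The same payload could just as easily drive a map pin, a footer banner, or a mobile notification; with embedded HTML, that choice would already have been made for you.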

Widgets and gadgets – terms which are being appropriated by mobile now – used to offer one of several choices of formats, similar to the options presented to mobile users on tablets today. It's about size and style, but not necessarily about presentation and design. Data is displayed, for the most part, in a way the designer decides. Period. Integration options assume display choices and formats that simply might not fit with a site, or they end up being ignored because they don't provide information in a format useful to the viewer.

AMBER alerts, for example, can be received via text messages now. But a text message doesn't necessarily help unless I'm really familiar with the area and have a good sense of direction. If the data were delivered in a simple standard format, it could quickly be displayed on a mapping application that showed me exactly where the child had gone missing in relation to where I am. But because the data is constrained, it's limited to a few zip codes per subscriber, and alerts don't offer an easy way to figure out exactly where "9th and Maple" might be.
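If an alert payload carried coordinates instead of a street name (a hypothetical data shape, since current alerts don't), relating it to the user's position becomes a one-function job:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical alert and user positions, e.g. from a JSON feed and the phone's GPS
alert = {"lat": 45.523, "lon": -122.676}
user = {"lat": 45.512, "lon": -122.658}
distance = haversine_km(user["lat"], user["lon"], alert["lat"], alert["lon"])
```

Any mapping application could then plot the alert and the user together, rather than leaving the reader to guess where "9th and Maple" is.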

The lack of an API, of a focus on hyperdata rather than hypertext and on offering data rather than pre-formatted information, could mean missed opportunities. An application today that doesn't integrate well with others via a data-focused API would be considered too dated to succeed, especially an application that purports to focus on sharing data. Such applications need to offer access to that data or they will not succeed.

In the case of web applications and infrastructure and social networking that may mean simply revenue left on the table. But in others, it may mean someone's child isn't going to be found. That is a big deal and it's something that a hyperdata approach and API might actually help with, if it was given the opportunity.

For further information visit: http://cloudcomputing.sys-con.com/node/2393171

Why Data Breaches Occur and How You Can Lessen Their Impact

One of the dirty little secrets about security: there is simply no way to make your company impervious to a data breach. It's almost a statistical certainty that you will, at some point or another, be hit with a security scenario that you're not prepared for. That's why security today is as much about damage control as it is about breach avoidance.

Consider the following:

Most breaches aren't that hard to execute

Attacks on corporate networks and data occur with alarming frequency. You might think that's because attackers have become more sophisticated, but that's not necessarily the case. In fact, the most recent Verizon Data Breach Investigations Report suggests a hacker with fairly rudimentary skills could have pulled off the majority of attacks in 2012.

And these attacks aren't isolated to large banks and government entities - they're pervasive across all industries. The bottom line is, if you have important data, chances are someone else thinks it's important too -- and will do whatever it takes to get to it.

Compliance mandates are limited and vague

U.S. compliance guidelines for data and cyber security are noticeably vague, leaving it up to corporations to determine best practices for maintaining the privacy and confidentiality of sensitive data. As a result, organizations typically do just enough to achieve compliance, when in fact, compliance with HIPAA, FERPA, FISMA, PCI and others, should actually be the low bar.

When it comes to sensitive data, you can never be too safe. Let's say an email list gets breached. This isn't regulated data. You're not going to get fined for non-compliance, but PII is still compromised. This represents a significant failure on the part of the responsible corporation, one that ultimately leads to loss of customer trust.

Big data is big business

It's hard to have a conversation about technology where the phrase "big data" doesn't come up. For all the advantages associated with capturing large volumes of diverse data at high speeds, there's an inherent risk in storing lots of sensitive data in massively distributed databases in the cloud. Each node -- and a big data deployment can have hundreds or even thousands -- represents a point of failure where data can be accessed without authorization.

Don't forget about BYOD

Earlier this month, Google Chairman Eric Schmidt announced there are 500 million Android devices worldwide, with 1.3 million new activations daily. There are about 365 million iOS devices in play right now, and a large percentage of those devices are infiltrating the workplace. In fact, 36% of all email is now being opened on a phone or tablet, many of which are accessing data inside your firewall.

Each of these phones, tablets and mobile devices represents a potential security vulnerability. According to a site maintained by the US Department of Health and Human Services, 72% of data breaches dating back to 2009 stem from stolen, lost or improperly disposed-of devices, representing a total of 15.6 million individual health records. Device theft is pervasive, and the influx of mobile devices just presents more opportunity for sensitive regulatory and PII data to go missing.

Security keys are being mismanaged

Another concern is around the management of cryptographic keys, SSL certificates and other "opaque" objects. With the trend toward IT hybridization, organizations are being buried by a virtual avalanche of encryption keys, data tokens, SSL certificates, passwords and more.

If any of these security objects fell into the wrong hands, there's almost nothing in your corporate environment that wouldn't be at risk. Surprisingly, not a lot of forethought goes into the security, management, provisioning and revocation of these keys. In fact, we often hear stories about systems administrators storing keys in boot files or easily accessible spreadsheets on their hard drives. Think about it this way: You wouldn't lock your car and leave the keys in the driver's side door, would you?
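A small first step toward better key hygiene can be sketched simply: load keys from the deployment environment at runtime and refuse to start without them, rather than reading them from a spreadsheet or boot file on disk. The variable name here is an assumption made for the example.

```python
import os

def load_signing_key(env_var: str = "APP_SIGNING_KEY") -> bytes:
    # Fail loudly if the key is absent rather than silently falling back
    # to a key stored in a file on disk.
    value = os.environ.get(env_var)
    if value is None:
        raise RuntimeError(f"{env_var} is not set; refusing to start without a key")
    return value.encode()

# In practice the deployment system or a secrets manager populates this;
# it is hard-coded here only so the sketch is self-contained.
os.environ["APP_SIGNING_KEY"] = "example-only-not-a-real-key"
key = load_signing_key()
```

Keys injected this way never land in source control, shared drives, or administrators' spreadsheets, and provisioning and revocation reduce to updating the deployment configuration.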

The issues above only scratch the surface. There are still lingering questions and concerns about cloud security, authentication and ownership of data in SaaS applications to name a few more. On Monday, we'll look at some small things you can do that will have a profound impact on your data security profile. Stay tuned.

For further information visit: http://cloudcomputing.sys-con.com/node/2397295