Friday, 7 December 2012

Utility Computing Gets Closer in the Cloud


Jack Clark at ZDnet recently published a great series of articles on the current state of cloud computing, which included an article on utility computing called “Cloud computing’s utility future gets closer”. It’s one of the best reviews of where we are in the progression toward utility computing I’ve seen recently – probably since John Cowan’s blog series on a similar topic or the GigaOm white paper by Paul Miller called Metered IT: the path to utility computing.
A few key takeaways from the article:
First, Clark states the cloud is changing nearly every aspect of the technology markets and more importantly how technology is accessed and used by organizations and individuals. Completely concur. The question of “what is cloud” is getting clearer every day. Cloud computing is clearly not just a new term for an old model, but a very real shift in the way IT resources are delivered and consumed.
Clark then defines a utility market as occurring “when an item has been commoditised to the point that it becomes very hard to differentiate on a technology basis, and instead companies distinguish themselves through different levels of service, availability and support.” In some ways we have certainly reached that point in cloud computing and infrastructure-as-a-service (IaaS), as you see AWS adding new service levels, Google slashing prices, and Amazon responding with further price cuts. But keep in mind that this is only the supply side – half of the equation. The demand side is arguably more important, given that there are hundreds of IaaS suppliers out there today, with many adding capacity as we speak. So how does that demand get fulfilled, and what’s missing?
Clark then turns to what’s missing. Among other things… “There is not yet a clear market mechanism for homogenising compute and storage from different providers making them truly interchangeable.” Absolutely agree, and this is the primary reason 6fusion created the WAC: a single unit of measure for the measurement and metering of IT resource consumption across any environment – public, private, virtual or physical. One thing Clark didn’t mention is that a single unit of measure is a foundational component of any utility – think kilowatt-hours for electricity or kilograms for coal.
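To make the idea of a single unit concrete, imagine collapsing heterogeneous resource metrics into one comparable number. The metrics and weights below are purely illustrative – this is a sketch of the concept, not 6fusion’s actual WAC formula:

```python
def compute_units(cpu_ghz_hours: float, ram_gb_hours: float,
                  storage_gb_hours: float, net_gb: float,
                  weights=(1.0, 0.5, 0.05, 0.1)) -> float:
    """Collapse heterogeneous resource consumption into one number.

    The weights are assumptions for illustration; a real utility standard
    would fix them by specification so any two providers meter identically.
    """
    w_cpu, w_ram, w_sto, w_net = weights
    return (w_cpu * cpu_ghz_hours + w_ram * ram_gb_hours
            + w_sto * storage_gb_hours + w_net * net_gb)
```

With a fixed formula like this, a workload metered on any provider – public, private, virtual or physical – yields the same number of units, which is what makes supply interchangeable.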
What else is missing? “A trading methodology” according to Clark. I would add that a trading platform is missing as well – or more specifically, a marketplace of cloud brokers that serve as intermediaries between the supply and demand of computing. As a further example, we agree with James Mitchell from Strategic Blue that “at the moment, cloud pricing is not rational”. To borrow another quote from Mitchell: “can you imagine letting your electricity supplier bill you for your electricity using a measurement that they have made, using a meter that they invented, and then quoting it to you in a unit that they have pulled out of thin air, that cannot be compared to their competitors? Ridiculous!” A single unit of measure, a trading methodology and a trading platform are all critical here.
Clark concludes with this: “As cloud computing continues on its path to become a utility, the benefits to IT consumers will grow as prices are successively cut, but companies that cannot operate at the necessary scale of a utility are likely to run into problems.” Agreed – we are in the very early stages of utility computing and the land grab has just begun. However, the beauty of a commoditized utility is that anyone can participate, on both the supply and demand sides. You are likely to see a long-tail market emerge, with some very large players at the top (AWS, Google, Microsoft, etc.) and many mid-size and smaller players surviving in the market with access to global demand through utility computing exchanges, once the immovable asset becomes movable.
What are your thoughts? How close are we to utility computing? What are the big barriers to getting there from your perspective? The post Utility computing gets closer in the cloud appeared first on 6fusion.
For further information visit: http://cloudcomputing.sys-con.com/node/2466924

The Limits of Cloud: Gratuitous ARP and Failover


Cloud is great at many things. At other things, not so much. Understanding the limitations of cloud will better enable a successful migration strategy. One of the truisms of technology is that it takes a few years of adoption before folks really start figuring out what a technology excels at – and conversely what it doesn't. That's generally because early adoption is focused on lab-style experimentation that rarely extends beyond basic needs.
It's when adoption reaches critical mass and folks start trying to use the technology to implement more advanced architectures that the "gotchas" start to be discovered.
Cloud is no exception.
A few of the things we've learned over the past years of adoption is that cloud is always on, it's simple to manage, and it makes applications and infrastructure services easy to scale. Some of the things we're learning now is that cloud isn't so great at supporting application mobility, monitoring of deployed services and at providing advanced networking capabilities.
The reason that last part is so important is that a variety of enterprise-class capabilities we've come to rely upon are ultimately enabled by some of the advanced networking techniques cloud simply does not support. Take gratuitous ARP, for example. Most cloud providers do not allow or support this feature which ultimately means an inability to take advantage of higher-level functions traditionally taken for granted in the enterprise – like failover.
GRATUITOUS ARP and ITS IMPLICATIONS
For those unfamiliar with gratuitous ARP, let's get you familiar with it quickly. A gratuitous ARP is an unsolicited ARP request made by a network element (host, switch, device, etc.) to resolve its own IP address. The source and destination IP addresses are both set to the IP address assigned to the network element. The destination MAC is a broadcast address. Gratuitous ARP is used for a variety of reasons. For example, if there is an ARP reply to the request, it means an IP conflict exists. When a system first boots up, it will often send a gratuitous ARP to indicate it is "up" and available. And finally, it is used as the basis for load-balancing failover.
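On the wire, a gratuitous ARP is just an ordinary ARP request whose sender and target IP fields carry the same address. A minimal sketch of building that 28-byte payload, following the RFC 826 field layout and the broadcast target MAC described above:

```python
import socket
import struct

BROADCAST_MAC = b"\xff" * 6

def gratuitous_arp_payload(ip: str, mac: bytes) -> bytes:
    """Build the 28-byte ARP payload of a gratuitous ARP request (RFC 826)."""
    ip_bytes = socket.inet_aton(ip)
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,              # hardware type: Ethernet
        0x0800,         # protocol type: IPv4
        6, 4,           # hardware / protocol address lengths
        1,              # opcode: request
        mac,            # sender MAC
        ip_bytes,       # sender IP
        BROADCAST_MAC,  # target MAC: broadcast, per the description above
        ip_bytes,       # target IP == sender IP - what makes it "gratuitous"
    )
```

Actually putting such a frame on a segment requires a raw socket and root privileges; the sketch only shows the packet structure that makes the announcement both useful and, as discussed below, dangerous.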
Most cloud environments do not allow broadcast traffic of this nature. After all, it's practically guaranteed that you are sharing a network segment with other tenants, and thus broadcasting traffic could certainly disrupt other tenants' traffic. Additionally, as security-minded folks will be eager to remind us, it is fairly well-established that the default for accepting gratuitous ARPs on the network should be "don't do it".
The astute observer will realize the reason for this; there is no security, no ability to verify, no authentication, nothing. A network element configured to accept gratuitous ARPs does so at the risk of being tricked into trusting, explicitly, every gratuitous ARP – even those that may be attempting to fool the network into believing it is a device it is not supposed to be.
That, in essence, is ARP poisoning, and it's one of the security risks associated with the use of gratuitous ARP. Granted, someone needs to be physically on the network to pull this off, but in a cloud environment that's not nearly as difficult as it might be on a locked down corporate network. Gratuitous ARP can further be used to execute denial of service, man in the middle and MAC flooding attacks. None of which have particularly pleasant outcomes, especially in a cloud environment where such attacks would be against shared infrastructure, potentially impacting many tenants.
Thus cloud providers are understandably leery about allowing network elements to willy-nilly announce their own IP addresses. That said, most enterprise-class network elements have implemented protections against these attacks precisely because of the reliance on gratuitous ARP for various infrastructure services. Most of these protections use a technique that will tentatively accept a gratuitous ARP, but not enter it in its ARP cache unless it has a valid IP-to-MAC mapping, as defined by the device configuration. Validation can take the form of matching against DHCP-assigned addresses or existence in a trusted database.
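That tentative-accept-then-validate flow can be sketched in a few lines. The trusted table and function names here are hypothetical illustrations, not any particular vendor's implementation:

```python
# Trusted IP-to-MAC bindings, e.g. learned from DHCP snooping
# or provisioned from an inventory database (illustrative data).
TRUSTED_BINDINGS = {
    "10.0.0.5": "aa:bb:cc:dd:ee:01",
    "10.0.0.6": "aa:bb:cc:dd:ee:02",
}

def accept_gratuitous_arp(sender_ip: str, sender_mac: str) -> bool:
    """Only admit the mapping to the ARP cache if it matches a trusted binding.

    An unknown IP or a MAC that disagrees with the trusted record is treated
    as a possible poisoning attempt and rejected.
    """
    expected = TRUSTED_BINDINGS.get(sender_ip)
    return expected is not None and expected == sender_mac
```

The burden on a cloud provider is visible here: the trusted table must be authoritative for every address on the segment, which is manageable in an enterprise but unwieldy at multi-tenant scale.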
Obviously these techniques would put an undue burden on a cloud provider's network, given that any IP address on a network segment might be assigned to a very large set of MAC addresses. Simply put, gratuitous ARP is not cloud-friendly, and thus you will be hard-pressed to find a cloud provider that supports it.
What does that mean?
That means, ultimately, that failover mechanisms in the cloud cannot be based on traditional techniques unless a means to replicate gratuitous ARP functionality without its negative implications can be designed.
Which means, unfortunately, that traditional failover architectures – even using enterprise-class load balancers in cloud environments – cannot really be implemented today. For IT organizations preparing to migrate business-critical applications and services to cloud environments, that means a careful review of their requirements and of the cloud environment's capabilities to determine whether availability and uptime goals can – or cannot – be met using a combination of cloud and traditional load balancing services.
For further information visit: http://cloudcomputing.sys-con.com/node/2469233

Wednesday, 5 December 2012

Cloud Developers Challenged to Build Next Generation xRTML 3.0 Apps


Realtime, creator of the leading global technology framework and applications to power the Realtime web, announced a competition for developers to submit their xRTML 3.0 apps before a distinguished panel of judges, including developer Peter Lubbers of Google and Sam Wierema of TheNextWeb. The latest release of xRTML 3.0, the eXtensible Realtime Multiplatform Language, is transforming the World Wide Web into the Realtime Web.
The new xRTML 3.0 flattens the learning curve for building next-generation apps with bidirectional Realtime communications. Its instantaneous updates employ a fraction of the bandwidth required of traditional request/response and near-real time technologies. Major enhancements include:
 • A more robust, coherent and flexible core framework.
 • Provision for the use of multiple versions.
 • Beta release of a storage layer with built-in connection and security protocol, and provision of read/write permissions.
 • A new templating system focused on data rather than form, making it an invaluable tool for data-dependent applications.
 • An inheritance model to simplify processes and make it easier to extend tags or create new ones from scratch.
 • Metadata to provide a virtual roadmap within your browser.
For further information visit: http://cloudcomputing.sys-con.com/node/2463061

Gartner Highlights Cloud Data Encryption Gateways


Last month Gartner Analyst Jay Heiser conducted an extremely informative and thought-provoking webinar entitled "The Current and Future State of Cloud Security, Risk and Privacy." During the presentation, Mr. Heiser highlighted what he called the "Public Cloud Risk Gap," characterized in part by inadequate processes and technologies by the cloud service providers and in part by a lack of diligence and planning by enterprises using public cloud applications. In many ways, it was a call to arms to ensure that adequate controls, thought and preparation are put to use before public clouds are adopted by enterprises and public sector organizations.
From the side of the cloud application provider, the webinar noted that most cloud service offerings are incomplete when measured against traditional "on-premise" security standards, there are relatively few security-related Service Level Agreements (SLAs), and there is minimal transparency on the security posture of most cloud services. From the enterprise side (the cloud service consumer), he points out that they frequently come to the table with inadequate planning and consideration in the area of security requirements definition and have an incomplete data sensitivity classification governing their data assets. Despite this, the webinar highlighted that organizations of all sizes are increasingly willing to place their data externally, and they are increasingly likely to have at least some formalized processes for the assessment of the associated risk - which is good news.
One innovative part of this new category of solutions is referred to by Gartner as "Cloud Encryption Gateways." These gateways put sensitive data control back into the hands of the enterprise in scenarios where they are using public cloud services. When designed and deployed correctly, they are able to preserve the end user's experience with the cloud application (think of things like "Search" and "Reporting") even while securing the data being processed and stored in the cloud. These Gateways intercept sensitive data while it is still on-premise and replace it with a random tokenized or strongly encrypted value, rendering it meaningless should anyone hack the data while it is in transit, processed or stored in the cloud. If encryption is used, the enterprise controls the key. If tokenization is used, the enterprise controls the token vault. But not all gateways are created equal, so please refer to this recent paper in our Knowledge Center to make sure you ask the right questions when determining which gateway is the right fit for your specific Security, IT and End User needs.
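To see why controlling the vault matters, here is tokenization in miniature. This is a toy sketch of the concept, not any vendor's product; the class and method names are invented for illustration:

```python
import secrets

class TokenVault:
    """Toy tokenization gateway: swap sensitive values for random tokens.

    The vault (token -> original value) stays on-premise under enterprise
    control; only the meaningless tokens travel to and live in the cloud.
    """
    def __init__(self):
        self._vault = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # random, carries no information
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]
```

Because each token is drawn at random, a breach of the cloud-side data yields nothing recoverable; the sensitive originals never leave the premises.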

Facebook to roll out HTTPS by default to all users


IDG News Service - Facebook started encrypting the connections of its North American users by default last week as part of a plan to roll out always-on HTTPS (Hypertext Transfer Protocol Secure) to its entire global user base.
For the past several years, security experts and privacy advocates have called on Facebook to enable always-on HTTPS by default because the feature prevents account hijacking attacks over insecure networks and also stops the governments of some countries from spying on the Facebook activities of their residents.
Despite the feature's security benefits, Facebook announced the start of its HTTPS rollout in a post on its Developer Blog last week, and not through its security page or its newsroom. "As announced last year, we are moving to HTTPS for all users," Facebook platform engineer Shireesh Asthana said Thursday in a blog post that also described many other platform changes and bug fixes relevant to developers. "This week, we're starting to roll out HTTPS for all North America users and will be soon rolling out to the rest of the world."
It's not clear when exactly the rollout for the rest of the world will start. "We have no dates to provide at this time, but we will be continuing with a global rollout in the near future," said Facebook spokesman Fred Wolens on Tuesday via email. The Electronic Frontier Foundation (EFF), a digital rights organization, welcomed the move via Twitter on Monday, describing it as a "huge step forward for encrypting the web."
The EFF has long been a proponent of always-on HTTPS adoption. In collaboration with the Tor Project, creator of the Tor anonymizing network and software, the EFF maintains a browser extension called HTTPS Everywhere that forces always-on HTTPS connections on websites that only support the feature on an opt-in basis. Twitter, Gmail and other Google services already have HTTPS turned on by default.
Facebook launched always-on HTTPS as an opt-in feature for users in January 2011. However, the initial implementation was lacking because whenever users launched a third-party application that didn't support HTTPS on the website, the entire Facebook connection was switched back to HTTP.
In order to address this problem, in May 2011 Facebook asked all platform application developers to acquire SSL certificates and make their apps HTTPS-compatible by Oct. 1 that same year.
"It is far from a simple task to build out this capability for the more than a billion people that use the site and retain the stability and speed we expect, but we are making progress daily towards this end," Wolens said. "We have already deployed significant performance enhancements to our load balancing infrastructure to mitigate most of the impact of moving to HTTPS, and will be continuing this work as we deploy this feature. In the meantime, we have been working with developers to ensure that their third-party applications are transitioned to HTTPS, and most have already completed this process."

Symantec spots odd malware designed to corrupt databases


Symantec has spotted another odd piece of malware that appears to be targeting Iran and is designed to meddle with SQL databases.
The company discovered the malware, called W32.Narilam, on Nov. 15 but on Friday published a more detailed writeup by Shunichi Imano. Narilam is rated as a "low risk" by the company, but according to a map, the majority of infections are concentrated in Iran, with a few in the U.K., the continental U.S. and the state of Alaska.
Interestingly, Narilam shares some similarities with Stuxnet, the malware targeted at Iran that disrupted its uranium refinement capabilities by interfering with industrial software that ran its centrifuges. Like Stuxnet, Narilam is also a worm, spreading through removable drives and network file shares, Imano wrote.
Once on a machine, it looks for Microsoft SQL databases. It then hunts for specific words in the SQL database -- some of which are in Persian, Iran's main language -- and replaces items in the database with random values or deletes certain fields. Some of the words include "hesabjari," which means current account; "pasandaz," which means savings; and "asnad," which means financial bond, Imano wrote.
"The malware does not have any functionality to steal information from the infected system and appears to be programmed specifically to damage the data held within the targeted database," Imano wrote. "Given the types of objects that the threat searches for, the targeted databases seem to be related to ordering, accounting, or customer management systems belonging to corporations."
The types of databases sought by Narilam are unlikely to be employed by home users. But Narilam could be a headache for companies that use SQL databases but do not keep backups. "The affected organization will likely suffer significant disruption and even financial loss while restoring the database," Imano wrote. "As the malware is aimed at sabotaging the affected database and does not make a copy of the original database first, those affected by this threat will have a long road to recovery ahead of them."
Stuxnet is widely believed to have been created by the U.S. and Israel with the intent of slowing down Iran's nuclear program. Since its discovery in June 2010, researchers have linked it to other malware including Duqu and Flame, indicating a long-running espionage and sabotage campaign that has prompted concern over escalating cyber conflict between nations.

Monday, 3 December 2012

Security in the Public Cloud Is a Shared Responsibility


How to secure your applications running in the Amazon Public Cloud
When you host applications in the public cloud, you assume partial responsibility for securing the application. The cloud provider, for example Amazon Web Services (AWS), secures the physical data center (with locked badge-entry doors, fences, guards, etc.) in addition to securing the physical network with perimeter firewalls. This is no significant change from how you secure your corporate datacenter.
Just like you enhance the security of physical and virtual servers in your datacenter with host-based firewalls (iptables, Windows Firewall), anti-virus and intrusion detection, so you must protect your public cloud servers (in AWS parlance, "instances") with similar security measures. This is the joint or shared security responsibility: AWS secures the physical datacenter and firewalls the network; you, the AWS customer, secure each instance and its application with host-based firewalls, anti-virus and intrusion detection. In addition, if your public cloud applications must be compliant, perhaps with PCI regulations, then you can add file integrity monitoring and log file monitoring to each AWS instance.
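As an illustration of "your half" of the model, a default-deny host firewall on a Linux instance serving a web application might look like the fragment below. The rules and the admin source range are examples only, not a recommendation for any specific deployment:

```shell
# Illustrative iptables policy for a web-serving instance (run as root).
# Default-deny inbound, then open only what the application needs.
iptables -P INPUT DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT                    # HTTPS to the app
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT   # SSH from an example admin range
```

Note that this complements, rather than replaces, the AWS security group on the instance: the provider filters at the network edge, and you filter again on the host.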
Security is shared; no blame goes around. Watch a quick demo of how to enhance the security of your AWS instances and applications.
For further information visit: http://cloudcomputing.sys-con.com/node/2459176

Cloud 2.0: Re-inventing CRM


Cloud computing isn’t just re-inventing technology
Cloud computing isn’t just re-inventing technology; it will also drive evolution of the business practices that the technology is used for.
Take CRM, for example: Customer Relationship Management. This is a discipline that started with simple contact-management apps like ACT!, evolved through Goldmine, and then of course came Salesforce.com.
After the ASP (Application Service Provider) phase of the Cloud evolution we’ve since had the social media explosion, and so the principal category to add is “social media CRM”. After that came Cloud, and so we’re now at a phase best described as Cloud 2.0. This is most powerfully demonstrated by the public sector, where CRM is about ‘citizen engagement’ and where the core expression of the model can be referenced through ‘CORE’ design, standing for Community Oriented Re-Engineering.
In short this reflects the simple point that online is about communities, and how you re-engineer your business processes to harness this principle is the fundamental nature of this CORE design and therefore how it can be used to implement a Cloud 2.0 strategy.
I have been so excited about the recent Canada Health Infoway publication because it also references the term Cloud 2.0 and this design principle and, most importantly, maps it to possible action areas for the Canadian eHealth sector:
For further information visit: http://cloudcomputing.sys-con.com/node/2463839

Wednesday, 21 November 2012

Scientists Find Cheaper Way to Ensure Internet Security

Scientists at Toshiba and Cambridge University have perfected a technique that offers a less expensive way to ensure the security of the high-speed fiber optic cables that are the backbone of the modern Internet.
The research, which will be published Tuesday in the science journal Physical Review X, describes a technique for making infinitesimally short time measurements needed to capture pulses of quantum light hidden in streams of billions of photons transmitted each second in data networks. Scientists used an advanced photodetector to extract weak photons from the torrents of light pulses carried by fiber optic cables, making it possible to safely distribute secret keys necessary to scramble data over distances up to 56 miles.
Such data scrambling systems will most likely be used first for government communications systems for national security. But they will also be valuable for protecting financial data and ultimately all information transmitted over the Internet.
The approach is based on quantum physics, which offers the ability to exchange information in a way that the act of eavesdropping on the communication would be immediately apparent. The achievement requires the ability to reliably measure a remarkably small window of time to capture a pulse of light, in this case lasting just 50 picoseconds — the time it takes light to travel 15 millimeters.
The secure exchange of encryption keys used to scramble and unscramble data is one of the most vexing aspects of modern cryptography. Public key cryptography uses a key that is publicly distributed and a related secret key that is held privately, allowing two people who have never met physically to securely exchange information. But such systems have a number of vulnerabilities, including potentially to computers powerful enough to decode data protected by mathematical formulas.
If it is possible to reliably exchange secret keys, it is possible to use an encryption system known as a one-time pad, one of the most secure forms. Several commercially available quantum key distribution systems exist, but they require transmitting the quantum key separately from the communication data, frequently in a separate optical fiber, according to Andrew J. Shields, one of the authors of the paper and the assistant managing director for Toshiba Research Europe. This adds cost and complexity to the cryptography systems used to protect the high-speed information that flows over fiber optic networks.
Weaving quantum information into conventional networking data will lower the cost and simplify the task of coding and decoding the data, making quantum key distribution systems more attractive for commercial data networks, the authors said. Modern optical data networking systems increase capacity by transmitting multiple data streams simultaneously in different colors of light. The Toshiba-Cambridge system sends the quantum information over the same fiber, but isolates it in its own frequency.
“We can pick out the quantum photons from the scattered light using their expected arrival time at the detector,” Dr. Shields said. “The quantum signals hit the detector at precisely known times – every nanosecond – while the arrival time of the scattered light is random.”
Despite their ability to carry prodigious amounts of data, fiber-optic cables are also highly insecure. An eavesdropper needs only to bend a cable and expose the fiber, Dr. Shields said. It is then possible to capture light that leaks from the cable and convert it into digital ones and zeros.
“The laws of quantum physics tell us that if someone tries to measure those single photons, that measurement disturbs their state and it causes errors in the information carried by the single photon,” he said. “By measuring the error rate in the secret key, we can determine whether there has been any eavesdropping in the fiber and in that way directly test the secrecy of each key.”
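The error-rate check Dr. Shields describes can be sketched in a few lines: sacrifice a random sample of the sifted key, compare it publicly, and abort if too many bits disagree. The 11% abort threshold below is a commonly cited figure for BB84-style protocols, used here as an assumption:

```python
import random

def estimate_qber(alice_bits, bob_bits, sample_fraction=0.2, seed=0):
    """Estimate the quantum bit error rate from a sacrificed random sample."""
    rng = random.Random(seed)
    n = len(alice_bits)
    idx = rng.sample(range(n), max(1, int(n * sample_fraction)))
    errors = sum(alice_bits[i] != bob_bits[i] for i in idx)
    return errors / len(idx)

def key_looks_secret(qber, threshold=0.11):
    """Accept the key only if the error rate is below the abort threshold
    (11% is a commonly cited figure for BB84-style protocols)."""
    return qber < threshold
```

The sampled bits are discarded afterwards; only the unrevealed remainder becomes key material.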

PCI Compliance for Retailers from the Cloud Perspective

One of the key drivers to IT security investment is compliance. Several industries are bound by various mandates that require certain transparencies and security features. They are designed to mitigate aspects of risk including maintaining the sacrosanctity of customer information, financial data and other proprietary information.
One such affected vertical is retail. Whether you’re Wal-Mart or Nana’s Knitted Kittens, if you store customer information or process payments using customers’ credit cards, you are required to comply with a variety of security standards. Although there are several auditing agencies and mandating bodies, today we will concentrate on the one compliance regime that is typically applicable to every retailer: PCI.
PCI (Payment Card Industry) enforces Data Security Standards that look to ensure that ALL companies that process, store or transmit credit card information maintain a secure environment. Now of course, not all merchants are created equal. Nana obviously doesn’t process the volume or the dollar amount of a national or even a high-traffic regional retailer. However, this doesn’t let Nana off the hook. Her online shopping cart still needs to be Payment Application DSS validated (PCI compliant). She is still required to pass security audits of her network…just not as often.
But for the sake of this example, let’s assume you are a retailer who processes more than 20,000 transactions a year and the administrative burden of PCI is a real concern. In fact, it is a business necessity to maintain merchant accounts with VISA, American Express and MasterCard. And it is hugely important to keep the confidence of your customers. Fines for non-compliance aside, a breach of your network could cost millions of dollars. And that doesn’t begin to calculate the cost of customer defection through loss of trust.
Most, if not all, retailers have some sort of PCI monitoring in place. However, they are often cumbersome, expensive and resource heavy. Additionally, too many retail organizations don’t employ a compliance officer, much less a dedicated security person. This doesn’t mean these functions aren’t part of someone’s job description. Typically, they are yet another line item in a plethora of competing priorities and mission critical initiatives. In that security can be considered a cost center, the move to simply do the bare minimum to meet compliance is often an attractive alternative. Until now. Until the cloud. More specifically, a holistic enterprise security initiative deployed and managed from the cloud.
So how does cloud-based security/security-as-a-service meet the requirements of PCI while driving down costs, freeing up personnel resources and providing an easy-yet-comprehensive suite of capabilities and functions? The easiest way to illustrate the potential is to look at the individual PCI requirements and how they are addressed from the cloud:
1.    Protect Data: A cloud-based SIEM offering can accomplish the most important feature of this requirement: the ability to instantly recognize any change, intrusion or activity to your firewall IN REAL TIME. That’s the key. There isn’t the lag of looking at all the logs a week later when the damage has been done, or the inability to tell a suspicious action from a white-noise false positive. Whereas many SIEM products can do just this, ones from the cloud provide the additional benefit of 24/7/365 monitoring across the entire enterprise. And you get the scope of visibility and Fortune 500-class protection for literally pennies on the dollar.
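The core of that real-time change detection is simple to sketch: fingerprint the firewall configuration and raise an event the instant it drifts. This is a conceptual sketch, not any particular SIEM product's implementation:

```python
import hashlib

def fingerprint(config_text: str) -> str:
    """Hash the firewall configuration so any change is instantly detectable."""
    return hashlib.sha256(config_text.encode()).hexdigest()

def detect_change(baseline_fp: str, current_text: str):
    """Return an alert event the moment the config drifts from the baseline."""
    current_fp = fingerprint(current_text)
    if current_fp != baseline_fp:
        return {"event": "firewall_config_changed", "fingerprint": current_fp}
    return None
```

A cloud SIEM runs this kind of comparison continuously against collected device state and logs, which is what closes the "look at the logs a week later" gap.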

For further information visit: http://cloudcomputing.sys-con.com/node/2435195

Cloud-Integrated Storage Broadens Its Appeal

While there is no denying that cloud storage has delivered the promise of unlimited “pay-as-you-go” storage capacity, simplified disaster recovery, and savings in costs and maintenance, these attributes alone aren’t driving the growing business adoption. Instead, it is the rise of cloud-integrated storage appliances, which have augmented cloud storage to provide the levels of security, availability, connectivity and performance found in traditional storage systems, that has made cloud storage a viable choice for business.
With this week’s announcement of TwinStrata CloudArray 4.0, the flexibility, availability and performance of cloud-integrated storage has improved further, narrowing the functionality gap between cloud-integrated storage and traditional data storage systems, while leveraging all of the benefits of cloud. Some of the highlights include:
 • Choice of server connectivity: With iSCSI and NAS connectivity options, the broadest range of applications now seamlessly interoperate with cloud storage, regardless of whether the requirement is file or block-based access.
 • High performance: New appliances offering hybrid SSD configurations enable demanding applications to use cloud storage without the expected performance tradeoff.
 • “Future-proof” platform flexibility: New in-cloud platforms, in addition to virtual and physical platforms, protect an investment in cloud-integrated storage that can continue to be leveraged even if your environment migrates from physical to virtual or entirely into the cloud.
 • Broadest choice of providers: Over 20 different cloud providers to choose from means there is never any worry of vendor lock-in, and always the choice of best-of-breed cloud storage.
 • Higher availability: Fully redundant appliances with no single point of failure allow data to either be stored directly to cloud or local copies replicated to the cloud, minimizing risk of downtime and offering a built-in disaster recovery strategy without offsite infrastructure.
If you haven’t yet considered augmenting your IT environment with cloud-integrated storage, now is the time to examine all of the benefits cloud storage can offer. We’ll be hosting a webinar next Wednesday to talk about our new enterprise-class capabilities and how our customers are using cloud-integrated storage to streamline their storage environments. If you’d like to join us, feel free to register at https://www1.gotomeeting.com/register/586633720

Friday, 9 November 2012

Five Essential Components of Virtual Desktop ROI

Server virtualization was, for many in IT, a major win. IT departments and data centers were suddenly able to do a whole lot more with a whole lot fewer resources. Naturally, as time goes on, it’s become more and more attractive for IT to consider desktop virtualization. Yet, the virtual desktop requires an infrastructure that’s simply not in place for many companies, and the ROI isn’t always clear from the start.
If you’re going to see desktop virtualization pay off for your organization, there are five factors you need to look at closely:
Hardware failure. Desktop computers have components prone to failure. Virtual desktop clients are increasingly using solid-state components, dramatically reducing the number of moving parts – which are the most likely cause of failure. Some are even avoiding fans and complex motherboards that can be short-circuited. If you can reduce hardware failure by 75%, virtualizing the desktop starts to look really attractive.
The cost to upgrade. Upgrading the processing power of a virtualized desktop is as simple as reallocating VM resources. You don’t have to order new desktops or desktop components, and you don’t have to deploy them either.
End-user support and management. You don’t need to remote in to work on a virtual desktop; you simply open your hypervisor, just like you would with a virtual server. You can manage the boot of any virtual desktop from your office, freeing up valuable staff time.
Deployment and scalability. While larger organizations might have pre-configured desktops sitting around waiting for deployment, most organizations don’t have that luxury. A new virtual desktop means simply pushing a pre-loaded template. It can be done in less than an hour.
Performance. With virtual desktops, the only potential bottleneck is I/O. All data travels across the network backbone rather than out to the edge, so virtualized desktops often deliver an increase in performance as well as a reduction in latency.
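The five factors above lend themselves to a back-of-the-envelope comparison. The sketch below is purely illustrative: every figure (fleet size, failure rates, per-unit costs) is an assumption to be replaced with your own numbers, not a benchmark.

```python
# Back-of-the-envelope virtual desktop ROI sketch. Every number below
# is an illustrative assumption -- substitute your own rates and costs.

def annual_fleet_cost(n_desktops, failure_rate, repair_cost,
                      upgrade_cost, support_cost):
    """Rough annual cost for a desktop fleet from the factors above."""
    failures = n_desktops * failure_rate
    return (failures * repair_cost          # hardware failure
            + n_desktops * upgrade_cost     # cost to upgrade
            + n_desktops * support_cost)    # support and management

# Physical fleet: assumed 10% annual hardware failure rate.
physical = annual_fleet_cost(500, 0.10, 300, 150, 200)

# Virtual fleet: the 75% failure reduction discussed above, cheaper
# upgrades (reallocating VM resources) and lighter support effort.
virtual = annual_fleet_cost(500, 0.025, 300, 40, 80)

savings = physical - virtual
print(f"physical: ${physical:,.0f}  virtual: ${virtual:,.0f}  savings: ${savings:,.0f}")
```

Even with modest assumed per-unit numbers, the failure-rate and upgrade-cost deltas dominate the comparison, which is why those two factors deserve the closest scrutiny.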
If your organization is trying to justify desktop virtualization, take a look at things from the perspective of these five factors and see whether it can work for you.
For further information visit: http://cloudcomputing.sys-con.com/node/2432059

Cloud Computing : Big Data at Sears


Sears, plus its acquired entity Kmart, belongs to Sears Holdings, whose goal is to get closer to its customers. That requires big-time analytic capabilities. While revenue at Sears has declined from $50B in 2008 to $42B in 2011, rivals like Wal-Mart, Target and Amazon have grown steadily with better profits. Amazon’s retail business has gone from $19B in revenue in 2008 to $48B in 2011, passing Sears for the first time.
Sears used IMS (IBM’s first-generation database product) on the mainframe, plus Teradata. Its ETL process, using IBM DataStage software on a cluster of distributed servers, took 20 hours to run. Since the company’s adoption of Hadoop back in 2010, one of the steps (which took 10 of those 20 hours) now runs in 17 minutes. Their slogan is “ETL must die”, as they would like to load raw data directly into Hadoop. The old systems consisted of EMC Greenplum, Microsoft SQL Server, and Oracle Exadata (four boxes) for analytical workloads. That is all being replaced by Hadoop, Datameer, MySQL, InfoBright, and Teradata.
Sears’ process for analyzing marketing campaigns for loyalty club members used to take six weeks on mainframe, Teradata, and SAS servers. The new process running on Hadoop can be completed weekly. For certain online and mobile commerce scenarios, Sears can now perform daily analyses. The Hadoop systems at 200 Terabytes cost about one-third of 200-TB relational platforms. Mainframe costs have been reduced by more than $500K per year while delivering 50-100 times better performance on batch jobs. The volume of data on Hadoop is currently at 2 Petabytes. As the CTO says, Hadoop is no longer a science project at Sears – critical reports run on the platform, including financial analyses; SEC reporting; logistics planning; and analysis of supply chains, products, and customer data. Sears uses Datameer, a spreadsheet-style tool that supports data exploration and visualization directly on Hadoop, and claims to develop interactive reports in three days that used to take six to 12 weeks.
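The batch jobs Sears moved onto Hadoop follow the map/shuffle/reduce pattern, which is what makes them parallelizable across a cluster. Here is a toy in-process illustration of that pattern (the sales records are made up, and this runs on one machine, not on Hadoop):

```python
# Toy illustration of the map/shuffle/reduce pattern behind Hadoop
# jobs like the ETL step described above -- summing sales per store.
from collections import defaultdict

records = [("store-1", 120.0), ("store-2", 80.0), ("store-1", 45.5)]

# Map: emit (key, value) pairs from each raw record.
mapped = [(store, amount) for store, amount in records]

# Shuffle: group values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: aggregate each group independently -- since groups do not
# interact, reducers can run in parallel across a cluster.
totals = {key: sum(values) for key, values in groups.items()}
print(totals)
```

The speedup Sears reports comes from distributing exactly these phases over many nodes and reading raw data in place, instead of funneling everything through a serial ETL pipeline.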
Sears has actually spun off a new subsidiary called MetaScale to offer Hadoop-based cloud services to other retailers. They are leveraging their three years of acquired Hadoop expertise to make money in analytics services. Many open questions remain on whether Hadoop will be the platform that brings big success to Sears in the future.
For further information visit: http://cloudcomputing.sys-con.com/node/2433869

Thursday, 8 November 2012

Is the Way to the European Cloud Paved Mainly with Good Intentions?

At the end of last month the EU released its plans for "Unleashing the Potential of Cloud Computing in Europe". But although the document(s) - just like EU commissioner Kroes in this video - do a good job of describing in non-technical terms what cloud is and why Europe should care about having a competitive cloud position, it kind of stops there.
Even though it defines three key actions - around standards, terms and the public sector taking a lead role - most described actions consist of softer items such as "promoting trust by coordinating with stakeholders", "identifying best practices," "promoting partnerships" and "investigating how to make use of other available instruments." Now of course European cloud computing can benefit from funding reserved for other EU initiatives such as the Connecting Europe Facility, and from side initiatives such as the "Opinion on Cloud Computing" published by the Article 29 working party that gives privacy-related contracting guidance, but in general the recently published plan seems to be more about what could and should be than about what is or will be.
Meanwhile, both regular and social media seem to be increasingly negative regarding the progress that Europe is making. With the North American continent clearly being the biggest cloud geo, and Asia-Pacific - also thanks to its many emerging economies - claiming the position of fastest-growing cloud geo, that leaves only less desirable labels - such as slowest or most fragmented - for describing the state of cloud activities in Europe.
Continuing to look at why things are harder and slower in Europe will just further reinforce negative sentiments; better to focus on European examples that are showing success. And in "Switch: How to Change Things When Change Is Hard" the brothers Dan and Chip Heath offer an engaging recipe for doing just that. In their book they describe how, by identifying "Bright Spots" (small pockets of positive exceptions), potential future success scenarios can be discovered. Next, they encourage promoting very specific actions instead of giving broad directions. For example: instead of asking people to eat healthier (too vague, too hard), they suggest healthcare activists promote a specific action such as "buying skimmed instead of full-fat milk" (simpler, easier, more actionable, more effective).
So in Europe, instead of pushing cloud as a concept (too vague, too hard), why not focus on identifying a few very specific and very simple scenarios, including their specific benefits? Next, Europe can concentrate on removing any (legal, fiscal, economic, cultural) barriers to these specific scenarios and promote them clearly and broadly. And in doing so, it is best to follow the Heath brothers' advice to promote this both on a rational and on an emotional level (or, as the brothers put it eloquently: both "Direct the Rider and Motivate the Elephant").
P.S. What potential European cloud Bright Spots would you suggest (using the comment field on this blog)?
For further information visit: http://cloudcomputing.sys-con.com/node/2420564

Little Data, Big Data and Very Big Data (VBD) or Big BS?

This is an industry trends and perspective piece about big data and little data, industry adoption and customer deployment.
If you are in any way associated with information technology (IT), business, scientific, media and entertainment computing or related areas, you may have heard big data mentioned. Big data has been a popular buzzword bingo topic and term for a couple of years now. Big data is being used to describe new and emerging along with existing types of applications and information processing tools and techniques.
I routinely hear from different people or groups trying to define what is or is not big data, and all too often those definitions are based on a particular product, technology, service or application focus. Thus it should be no surprise that those trying to police what is or is not big data will often do so based on their interests, sphere of influence, knowledge or experience, and what their jobs depend on.
Not long ago while out travelling I ran into a person who told me that big data is new data that did not exist just a few years ago. Turns out this person was involved in geology so I was surprised that somebody in that field was not aware of or working with geophysical, mapping, seismic and other legacy or traditional big data. Turns out this person was basing his statements on what he knew, heard, was told about or on sphere of influence around a particular technology, tool or approach.
FWIW, if you have not figured it out already: like cloud, virtualization and other technology-enabling tools and techniques, I tend to take a pragmatic approach vs. becoming latched onto a particular bandwagon (for or against), per se.
Not surprisingly there is confusion and debate about what is or is not big data, including whether it only applies to new vs. existing and old data. As with any new technology, technique or buzzword bingo topic theme, various parties will try to place what is or is not under the definition to align with their needs, goals and preferences. This is the case with big data, where you can routinely find proponents of Hadoop and MapReduce positioning big data as aligning with the capabilities and usage scenarios of those related technologies for business and other forms of analytics.
Not surprisingly, the granddaddy of all business analytics, data science and statistical number crunching is the Statistical Analysis System (SAS) from the SAS Institute. If these types of technology solutions and their peers define what is big data, then SAS (not to be confused with Serial Attached SCSI, which can be found on the back-end of big data storage solutions) can be considered first-generation big data analytics, or Big Data 1.0 (BD1 ;) ). That means Hadoop MapReduce is Big Data 2.0 (BD2 ;) ;) ), if you like, or dislike for that matter.
Funny thing about some fans and proponents or surrogates of BD2: they may have heard of BD1 tools like SAS with only a limited understanding of what they are or how they are or can be used. When I worked in IT as a performance and capacity planning analyst focused on servers, storage, network hardware, software and applications, I used SAS to crunch various data streams of event, activity and other data from diverse sources. This involved correlating data and running various analytic algorithms on it to determine response times, availability, usage and other things in support of modeling, forecasting, tuning and troubleshooting. Hmm, sounds like first-generation big data analytics, or Data Center Infrastructure Management (DCIM) and IT Service Management (ITSM), to anybody?
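That first-generation style of analytics (crunching a stream of measurements into response times and summary statistics for capacity planning) looks roughly like the sketch below. The sample timings are made up for illustration, and this is generic Python rather than anything SAS-specific.

```python
# Sketch of first-generation capacity-planning analytics: turn a
# stream of request timings into summary statistics. Sample data is
# made up for illustration.
import math
import statistics

response_ms = [12, 15, 11, 240, 14, 13, 16, 18, 12, 300]

mean = statistics.mean(response_ms)

# Nearest-rank 95th percentile: the ceil(0.95 * n)-th smallest sample.
# Percentiles matter because a few slow outliers can hide behind a
# healthy-looking mean.
p95 = sorted(response_ms)[math.ceil(0.95 * len(response_ms)) - 1]

print(f"mean={mean:.1f} ms  p95={p95} ms")
```

Whether the engine is SAS, a spreadsheet, or a Hadoop cluster, the underlying job (correlate, aggregate, summarize) is the same; what BD2 changed is the scale it runs at.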
Now to be fair, comparing SAS, SPSS or any number of other BD1-generation tools to Hadoop and MapReduce or BD2 second-generation tools is like comparing apples to oranges, or apples to pears. Let’s move on, as there is much more to what is big data than simply a focus on SAS or Hadoop.
This is where some interesting discussions, debates or disagreements can occur between those who latch onto or want to keep big data associated with being something new and usually focused around their preferred tool or technology. What results from these types of debates or disagreements is a missed opportunity for organizations to realize that they might already be doing or using a form of big data and thus have a familiarity and comfort zone with it.
By finding that familiarity or comfort zone, instead of seeing big data as something new, different, hyped or full of FUD (or BS), an organization can become comfortable with the term big data. Often, after taking a step back and looking at big data beyond the hype or FUD, the reaction is along the lines of: oh yeah, now we get it; sure, we are already doing something like that, so let’s take a look at some of the new tools and techniques to see how we can extend what we are doing.
Likewise, many organizations are doing big bandwidth already and may not realize it, thinking that is only what media and entertainment, government, technical or scientific computing, or high-performance/high-productivity computing (HPC) does. I'm assuming that some of the big data and big bandwidth pundits will disagree; however, if in your environment you are doing many large backups, archives, or content distributions, or copying large amounts of data for different purposes, then you are consuming big bandwidth and need big bandwidth solutions.
Yes I know, that's apples to oranges and perhaps stretching the limits of what is or can be called big bandwidth based on somebody's definition, taxonomy or preference. Hopefully you get the point that there is diversity across various environments as well as types of data and applications, technologies, tools and techniques.
What about little data then?
I often say that if big data is getting all the marketing dollars to generate industry adoption, then little data is generating all the revenue (and profit or margin) dollars through customer deployment. While tools and technologies related to Hadoop (or Haydoop if you are from HDS) are getting industry adoption attention (e.g. marketing dollars being spent), revenues from customer deployment are growing. If little data is databases and things not generally lumped into the big data bucket, and if you think or perceive big data to be only Hadoop MapReduce-based data, then does that mean all the large unstructured non-little data is very big data, or VBD?
For further information visit: http://cloudcomputing.sys-con.com/node/2420582

Friday, 2 November 2012

Microsoft Private Cloud 2.0 – BYOD Virtual Infrastructure


BYOD Private Cloud

Microsoft provides reference documents describing its Private Cloud Fast Track best practices, which explain in detail how its core suite of Windows Server, Hyper-V and System Center can be used to build an internal ‘IaaS’ – Infrastructure as a Service.

They position this as an enabler of PaaS and SaaS, the higher-layer application capabilities that this platform can better enable. In the paper Flexible Workstyles they start to look at the solution areas at which these types of platforms can be targeted. Microsoft headlines this around the transformational effect on IT of trends like ‘BYOD’ – Bring Your Own Device – and how, in enterprise IT design terms, this is forcing an evolution from device-centric to user-centric architecture. With this in mind, they describe the major options for mobile-enabling applications:

Device-optimized applications – For example, Microsoft Office Mobile has been developed to provide Windows Phone users a tailored version of Word, Outlook, Lync etc. on their device.

Web applications – Any device that can access the web can therefore access any web-based SaaS resource. This is more ubiquitous in an off-the-shelf manner, but the device experience is not as good.

Virtual Desktops and Applications - The ideal compromise of the two is ‘VDI’ – Virtual Desktop Infrastructure, achieved through the Microsoft Desktop Virtualization portfolio and Microsoft Application Virtualization (App-V).

Microsoft proposes that this latter option is the ideal one for enterprise users, because it provides an architecture and set of utilities for enhancing and managing the important aspects of this scenario, such as:

User State Virtualization - Users shifting between different devices and networks need their personal data to travel with them, not be locked to one specific device. USV is one mechanism to support this.

Unified Management - Tools like Configuration Manager use variables such as user identity, application dependencies, and network and device characteristics to dynamically determine the appropriate deployment type for a specific device.

Unified Asset Inventory and Device Management - This encompasses asset and device management, enabling IT staff to better track and control all IT resources and, where needed, manage them remotely, such as performing device data wipes.

End-to-end Security - To ensure compliant protection of devices there are a variety of security features, such as drive encryption, anti-malware, integration with IPsec for VPN security, and AD-based Rights Management, offering end-to-end protection from the device right through the network.

For further information visit: http://cloudcomputing.sys-con.com/node/2372354

Enhancing Microsoft Exchange 2013


Microsoft Exchange load balancing is just the beginning… Throughout the years, F5 BIG-IP has been a critical component supporting Microsoft Exchange to implement a variety of performance, security, and architectural requirements.

During that time, we've seen Microsoft Exchange evolve from a fairly simple small-business solution to a robust enterprise-class solution with an integrated ecosystem of services providing for communication, collaboration, and cooperation. As Microsoft prepares to launch its latest version of Exchange, again we're seeing some evolutionary changes in its architecture. Most prominent is the elimination of the requirement for persistence; the Client Access Server (CAS) component is now a stateless proxy. For those paying attention through the years, the implementation of persistence within Microsoft Exchange deployments was more often than not architecturally delegated to an F5 BIG-IP.

Does the elimination of the requirement render BIG-IP obsolete?

Of course not. While there's been some conjecture that layer 4 load balancing services will suffice for CAS 2013 (and for simple load balancing scenarios, it will) such statements are short-sighted in recognizing the increasing role of mobile and roaming clients, and the need to address core performance and security of public-facing applications (of which Exchange is certainly one). The delegation of persistence management to BIG-IP was often deemed most efficient because BIG-IP was a part of the architecture for other application delivery services – perimeter security, performance, server efficiency, multi-site resiliency, and, of course, scalability.
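The distinction between plain layer-4 distribution and the persistence BIG-IP historically provided can be sketched abstractly. The snippet below is illustrative only (no relation to BIG-IP's actual configuration or iRules); the server names are made up.

```python
# Illustrative contrast between round-robin (layer-4 style)
# distribution and client persistence ("sticky sessions") -- the
# service older Exchange CAS tiers commonly delegated to an ADC.
import itertools

servers = ["cas-1", "cas-2", "cas-3"]

# Layer-4 style: each request goes to the next server in rotation,
# regardless of which client sent it.
rr = itertools.cycle(servers)
def pick_round_robin(client):
    return next(rr)

# Persistence: a client is pinned to one server for its whole session.
pinned = {}
fallback = itertools.cycle(servers)
def pick_persistent(client):
    if client not in pinned:
        pinned[client] = next(fallback)
    return pinned[client]

assert pick_persistent("alice") == pick_persistent("alice")  # sticky
first = pick_round_robin("alice")
second = pick_round_robin("alice")
assert first != second                                       # not sticky
```

With CAS 2013 stateless, the pinning half of this picture becomes unnecessary for Exchange itself, which is exactly why simple layer-4 balancing now suffices for the basic case; the post's argument is that the ADC's other services remain.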

Scale and multi-site resiliency are imperatives today, with growth of users and devices and locations from which e-mail needs to and will be accessed. A distributed workforce can't afford to lose productivity due to slow delivery of e-mail or inability to readily access important content via any access medium, regardless of location. These are the kinds of challenges F5 BIG-IP addresses over and above routine tasks like load balancing.

These challenges have not been eliminated with Microsoft's most recent version of Exchange, and BIG-IP is still the ADC of choice for providing these services for deployments large and small. BIG-IP does layer 4 load balancing just as well as layer 7, after all, but also offers a robust set of delivery services that go well beyond either function. Ryan Korock, Technical Director focusing on Microsoft-partner initiatives, has a great list of 8 reasons why an ADC remains invaluable to Microsoft Exchange implementations, which goes into more detail on what BIG-IP has offered – and continues to offer – Microsoft Exchange deployments.

For further information visit: http://cloudcomputing.sys-con.com/node/2372344

Gartner Highlights the Importance of Third-Party Validation


Gartner recently published a report that highlights the growing importance of Cloud Access Security Brokers - solution providers that offer unified cloud computing security platforms. This solution category includes a new class of products that Gartner terms Cloud Encryption Gateways, which encrypt or tokenize sensitive information before it leaves an organization's firewall. These solutions, if designed properly, allow organizations to maintain control of sensitive data since they replace the original "clear-text" values with indecipherable replacement values in the cloud. Businesses are adopting these solutions to address issues raised by data residency requirements and data privacy regulations driven by a host of industry compliance mandates. In addition to enabling organizations to satisfy their data protection needs, products like those from PerspecSys also preserve the user experience with the SaaS application (such as Salesforce.com or Oracle CRM). With PerspecSys, critical functionality like Search is retained even when strong encryption (e.g., FIPS 140-2 validated modules) or tokenization is used to protect the data being sent to the cloud.
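The gateway pattern described above (swapping clear-text values for tokens before data leaves the firewall, while keeping the mapping on-premises) can be sketched minimally. This is a toy illustration of the concept, not PerspecSys's or any vendor's implementation; a real gateway adds key management, format preservation, and searchable schemes.

```python
# Minimal sketch of the cloud-encryption-gateway idea: sensitive
# values are replaced with random tokens before leaving the firewall;
# the token-to-value vault never leaves the organization.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}   # token -> original value, kept on-premises

    def tokenize(self, value):
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token       # only this opaque value goes to the cloud

    def detokenize(self, token):
        # Reverse lookup happens behind the firewall on the way back.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
assert token != "4111-1111-1111-1111"   # cloud never sees clear text
assert vault.detokenize(token) == "4111-1111-1111-1111"
```

Because the replacement value carries no information about the original, a breach of the cloud copy exposes nothing; the security of the scheme rests entirely on protecting the on-premises vault.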

Gartner highlights the importance of using strong tokenization capabilities that have been evaluated by an independent third party. Practitioners from the payment card industry, where I spent quite a few years, are very familiar with this requirement.

Enterprises should make sure the providers they depend on to satisfy regulatory compliance or strict data privacy and residency requirements can deliver on the expected results. One way is to look for assessments from third parties like I referenced above. Well-qualified independent auditors that use established testing and evaluation criteria can validate that solutions are doing what providers say they do. This type of assessment ought to be a no-brainer for the technology providers and is something that enterprises should expect.

What else? Well, it may seem intuitive, but another important step is to look for products that use well-vetted and accepted industry approaches. For example, within the PerspecSys solution, we made great efforts to ensure that our customers could use industry-standard cryptographic modules that they have approved based on internally established screening criteria as well as external benchmarks, such as NIST FIPS 140-2 validation. When we initially began designing our solution, we considered developing a proprietary encryption algorithm that would make it simpler for us to preserve SaaS application functionality such as "Searching" and "Sorting" on data that was encrypted inside of the cloud. Creation of such an algorithm requires the designer to tweak and modify ("weaken") a strong algorithm in order to get the desired result. But when we considered the long-term ramifications of this approach, we understood that it ran completely counter to what enterprise security organizations would (and should) expect from a solution meant to protect their most sensitive business data. Standards-based security, robust and scalable, without exception - this continues to be a central design principle that enterprise security professionals require, and what we deliver as evidenced by the award-winning PerspecSys Cloud Data Protection Gateway.

Using Deep Virtualization to Rationalize Platforms and Data Centers


The latest Briefings Direct end-user case study uncovers how outerwear and sportswear maker and distributor Columbia Sportswear has used virtualization techniques and benefits to significantly improve its business operations.

We’ll see how Columbia Sportswear’s use of deep virtualization assisted in rationalizing its platforms and data center, as well as led to benefits in their enterprise resource planning (ERP) implementation. We’ll also learn how virtualizing mission-critical applications formed a foundation for improved disaster recovery (DR) best practices.

Here are some excerpts:

Gardner: Tell me a little bit about how you got into virtualization. What were some of the requirements that you needed to fulfill at the data center level?

Leeper: Pre-2009, we'd experimented with virtualization. It was one of those things that I had my teams working on, mostly so we could tell my boss that we were doing it, but there wasn’t a significant focus on it. It was a nice toy to play with in the corner and it helped us in some small areas, but there were no big wins there.

Columbia Sportswear is the worldwide leader in apparel and accessories. We sell primarily outerwear and sportswear products, and a little bit of footwear, globally. We have about 4,000 employees and 50-some-odd physical locations, not counting retail, around the world. The products are primarily manufactured in Asia, with sales distribution happening in both Europe and the United States.

My teams out of the U.S. manage our global footprint, and we are the sole source of IT support globally from here. In mid-2009, the board of directors at Columbia decided that we, as a company, needed a much stronger DR plan. That included the construction of a new data center for us to house our production environments offsite.

Extremely successful

We were extremely successful in that process. We were able to move our primary data center over a couple of weekends with very little downtime to the end users, and that was all built on VMware technology.

About a week after we had finished that project, I got a call from our CIO, who said he had purchased a new ERP system, and Columbia was going to start down the path of a fully new ERP implementation.

I was being asked at that time what platform we should run it on, and we had a clean slate to look everywhere we could to find what our favorite, what we felt was the most safe and stable platform to run the crown jewels of the company which is ERP. For us that was going to be the SAP stack.

Private cloud

Leeper: We consider ourselves to have a private cloud on-site. My team will probably start laughing at me for using that term, but we do believe we have a very flexible and dynamic environment to deploy, based on business requests, on premises, and we're pretty proud of that. It works pretty well for us.

Where we go next is all over the place. One of the things we're pretty happy about is the fact that we can think about things a little differently now than probably a lot of our peers, because of how migratory our workloads can be, given the virtualization. We started looking into things like hybrid cloud approaches and the idea of maybe moving some of our workloads out of our premises, our own data facilities, to a cloud provider somewhere else.

For us, that's not necessarily the discussion around the classic public cloud strategies for scalability and some of those things. For us, it's a temporary space at times, if we are, say, moving an office, we want to be able to provide zero downtime, and we have physical equipment on-premises.

It would be nice to be able to shutdown their physical equipment, move their data, move their workloads up to a temporary spot for four or five weeks, and then bring it back at some point, and let users never see an outage while they are working from home or on the road. There are some interesting scenarios around DR for us and locations where we don't have real-time DR set up.

For instance, we were looking into some issues in Japan when, a year or so ago, Japan was unfortunately dealing with the earthquake and the tsunami's fallout on power.