Friday, 7 December 2012

Utility Computing Gets Closer in the Cloud


Jack Clark at ZDNet recently published a great series of articles on the current state of cloud computing, including a piece on utility computing called “Cloud computing’s utility future gets closer”. It’s one of the best reviews of where we are in the progression toward utility computing I’ve seen recently – probably since John Cowan’s blog series on a similar topic or the GigaOm white paper by Paul Miller called Metered IT: the path to utility computing.
A few key takeaways from the article:
First, Clark states the cloud is changing nearly every aspect of the technology markets and more importantly how technology is accessed and used by organizations and individuals. Completely concur. The question of “what is cloud” is getting clearer every day. Cloud computing is clearly not just a new term for an old model, but a very real shift in the way IT resources are delivered and consumed.
Clark then defines a utility market as occurring “when an item has been commoditised to the point that it becomes very hard to differentiate on a technology basis, and instead companies distinguish themselves through different levels of service, availability and support.” In some ways we have certainly reached that point in cloud computing and infrastructure-as-a-service (IaaS), as you see AWS adding new service levels, Google slashing prices, and Amazon responding with further price cuts – but keep in mind this focuses on the supply side, only half of the equation. There is also the demand side, which is arguably more important given that there are hundreds of IaaS suppliers out there today, with many adding capacity as we speak. So how does that demand get fulfilled, and what’s missing?
Clark then turns to what’s missing. Among other things… “There is not yet a clear market mechanism for homogenising compute and storage from different providers making them truly interchangeable.” Absolutely agree, and this is the primary reason 6fusion created the WAC: a single unit of measure for metering IT resource consumption across any environment – public, private, virtual or physical. One thing Clark didn’t mention is that a single unit of measure is a foundational component of any utility – think kilowatt-hours for electricity or tonnes for coal.
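To make the idea concrete, here is a hypothetical sketch of what metering against a single unit of measure might look like. The actual WAC formula is 6fusion’s own and is not described in the article; the resource names and weights below are invented purely for illustration:

```python
def normalized_units(usage, weights):
    """Fold heterogeneous resource consumption (CPU, memory, storage, ...)
    into one metered number so that bills from different providers can be
    compared in the same unit. Weights and resource names are illustrative."""
    return sum(weights[resource] * amount for resource, amount in usage.items())

# Two providers metering the same workload would now quote comparable numbers:
usage = {"cpu_ghz_hours": 10, "ram_gb_hours": 4}
weights = {"cpu_ghz_hours": 1.0, "ram_gb_hours": 0.5}
total = normalized_units(usage, weights)
```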
What else is missing? “A trading methodology,” according to Clark. I would add that a trading platform is missing as well – or more specifically, a marketplace of cloud brokers that serve as intermediaries between the supply and demand of computing. As a further example, we agree with James Mitchell of Strategic Blue that “at the moment, cloud pricing is not rational”. To borrow another quote from Mitchell: “can you imagine letting your electricity supplier bill you for your electricity using a measurement that they have made, using a meter that they invented, and then quoting it to you in a unit that they have pulled out of thin air, that cannot be compared to their competitors? Ridiculous!” A single unit of measure, a trading methodology and a trading platform are all critical here.
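The kind of mechanism such a marketplace would need can be sketched as a toy order-matching loop. The prices and matching rule here are invented for illustration; a real exchange would also handle quantities, settlement and SLAs:

```python
def match_orders(bids, asks):
    """Match buyers and sellers of compute quoted in a common unit of
    measure: highest willingness-to-pay meets cheapest supply until the
    best bid no longer covers the best ask."""
    bids = sorted(bids, reverse=True)   # highest bid first
    asks = sorted(asks)                 # cheapest ask first
    trades = []
    while bids and asks and bids[0] >= asks[0]:
        trades.append((bids.pop(0), asks.pop(0)))  # a buyer meets a seller
    return trades
```

With bids of 5 and 3 and asks of 2 and 4 (per normalized unit), only the 5/2 pair clears; the remaining bid of 3 cannot cover the remaining ask of 4.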
Clark concludes with this: “As cloud computing continues on its path to become a utility, the benefits to IT consumers will grow as prices are successively cut, but companies that cannot operate at the necessary scale of a utility are likely to run into problems.” Agreed – we are in the very early stages of utility computing and the land grab has just begun. However, the beauty of a commoditized utility is that anyone can participate, on both the supply and demand sides. You are likely to see a long-tail market emerge, with some very large players at the top (AWS, Google, Microsoft, etc.) and many mid-size and smaller players surviving in the market with access to global demand through utility computing exchanges – when the immovable asset becomes movable.
What are your thoughts? How close are we to utility computing? What are the big barriers to getting there from your perspective? The post Utility computing gets closer in the cloud appeared first on 6fusion.
For further information visit: http://cloudcomputing.sys-con.com/node/2466924

The Limits of Cloud: Gratuitous ARP and Failover


Cloud is great at many things. At other things, not so much. Understanding the limitations of cloud will better enable a successful migration strategy. One of the truisms of technology is that it takes a few years of adoption before folks really start figuring out what it excels at – and conversely what it doesn't. That's generally because early adoption is focused on lab-style experimentation that rarely extends beyond basic needs.
It's when adoption reaches critical mass and folks start trying to use the technology to implement more advanced architectures that the "gotchas" start to be discovered.
Cloud is no exception.
A few of the things we've learned over the past years of adoption are that cloud is always on, it's simple to manage, and it makes applications and infrastructure services easy to scale. Some of the things we're learning now are that cloud isn't so great at supporting application mobility, at monitoring deployed services, or at providing advanced networking capabilities.
The reason that last part is so important is that a variety of enterprise-class capabilities we've come to rely upon are ultimately enabled by some of the advanced networking techniques cloud simply does not support. Take gratuitous ARP, for example. Most cloud providers do not allow or support this feature which ultimately means an inability to take advantage of higher-level functions traditionally taken for granted in the enterprise – like failover.
GRATUITOUS ARP and ITS IMPLICATIONS
For those unfamiliar with gratuitous ARP, let's get you familiar with it quickly. A gratuitous ARP is an unsolicited ARP request made by a network element (host, switch, device, etc.) to resolve its own IP address. Both the source and destination IP addresses are set to the IP address assigned to the network element, and the destination MAC is the broadcast address. Gratuitous ARP is used for a variety of reasons. For example, if there is an ARP reply to the request, it means an IP address conflict exists. When a system first boots up, it will often send a gratuitous ARP to indicate it is "up" and available. And finally, it is used as the basis for load balancing failover.
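The structure described above can be sketched by assembling the frame bytes by hand. This is a sketch only – actually transmitting the frame would require a raw socket and root privileges:

```python
import struct

def gratuitous_arp_frame(own_mac: bytes, own_ip: bytes) -> bytes:
    """Build an Ethernet frame carrying a gratuitous ARP request: sender
    and target IP are both the element's own address, and the frame is
    addressed to the broadcast MAC."""
    broadcast = b"\xff" * 6
    ethernet = broadcast + own_mac + struct.pack("!H", 0x0806)  # EtherType: ARP
    arp = struct.pack(
        "!HHBBH6s4s6s4s",
        1,                 # hardware type: Ethernet
        0x0800,            # protocol type: IPv4
        6, 4,              # hardware / protocol address lengths
        1,                 # opcode: request
        own_mac, own_ip,   # sender MAC and IP
        b"\x00" * 6,       # target MAC: unknown, zeroed in the request
        own_ip,            # target IP == sender IP: what makes it gratuitous
    )
    return ethernet + arp
```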
Most cloud environments do not allow broadcast traffic of this nature. After all, it's practically guaranteed that you are sharing a network segment with other tenants, and broadcast traffic could certainly disrupt other tenants' traffic. Additionally, as security-minded folks will be eager to remind us, it is fairly well established that the default for accepting gratuitous ARPs on the network should be "don't do it".
The astute observer will realize the reason for this; there is no security, no ability to verify, no authentication, nothing. A network element configured to accept gratuitous ARPs does so at the risk of being tricked into trusting, explicitly, every gratuitous ARP – even those that may be attempting to fool the network into believing it is a device it is not supposed to be.
That, in essence, is ARP poisoning, and it's one of the security risks associated with the use of gratuitous ARP. Granted, someone needs to be physically on the network to pull this off, but in a cloud environment that's not nearly as difficult as it might be on a locked down corporate network. Gratuitous ARP can further be used to execute denial of service, man in the middle and MAC flooding attacks. None of which have particularly pleasant outcomes, especially in a cloud environment where such attacks would be against shared infrastructure, potentially impacting many tenants.
Thus cloud providers are understandably leery about allowing network elements to willy-nilly announce their own IP addresses. That said, most enterprise-class network elements have implemented protections against these attacks precisely because of the reliance on gratuitous ARP for various infrastructure services. Most of these protections tentatively accept a gratuitous ARP but do not enter it into the ARP cache unless it carries a valid IP-to-MAC mapping, as defined by the device configuration. Validation can take the form of matching against DHCP-assigned addresses or existence in a trusted database.
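The tentative-accept logic amounts to a simple check against a trusted binding table. The names below are illustrative, not any vendor's actual API:

```python
def accept_gratuitous_arp(sender_ip, sender_mac, trusted_bindings, arp_cache):
    """Only enter the announced IP-to-MAC binding into the ARP cache if it
    matches a trusted source, such as a DHCP lease table or a manually
    maintained database."""
    if trusted_bindings.get(sender_ip) == sender_mac:
        arp_cache[sender_ip] = sender_mac
        return True
    return False   # spoofed or unknown binding: ignore the announcement
```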
Obviously these techniques would put an undue burden on a cloud provider's network, given that any IP address on a network segment might be assigned to a very large set of MAC addresses. Simply put, gratuitous ARP is not cloud-friendly, and thus you will be hard-pressed to find a cloud provider that supports it.
What does that mean?
That means, ultimately, that failover mechanisms in the cloud cannot be based on traditional techniques unless a means to replicate gratuitous ARP functionality without its negative implications can be designed.
Which means, unfortunately, that traditional failover architectures – even those using enterprise-class load balancers in cloud environments – cannot really be implemented today. For IT organizations preparing to migrate business-critical applications and services to cloud environments, that means a careful review of their requirements, and of the cloud environment's capabilities, to determine whether availability and uptime goals can – or cannot – be met using a combination of cloud and traditional load balancing services.
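One common shape such a replacement takes is control-plane failover: instead of the standby announcing a shared IP at layer 2, an external health check detects failure and remaps traffic through the provider's API (for example, reassigning an elastic IP or updating DNS). A minimal sketch, with both function arguments as placeholders injected by the caller:

```python
def api_driven_failover(primary_healthy, remap_traffic):
    """Route traffic to the primary while it is healthy; on failure,
    invoke the provider-API callback that points the public address at
    the standby. Both callbacks are illustrative placeholders."""
    if primary_healthy():
        return "primary"
    remap_traffic()        # e.g. reassociate an elastic IP with the standby
    return "standby"
```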
For further information visit: http://cloudcomputing.sys-con.com/node/2469233

Wednesday, 5 December 2012

Cloud Developers Challenged to Build Next Generation xRTML 3.0 Apps


Realtime, creator of the leading global technology framework and applications to power the Realtime web, announced a competition for developers to submit their xRTML 3.0 apps to a distinguished panel of judges, including developer Peter Lubbers of Google and Sam Wierema of TheNextWeb. The latest release of xRTML 3.0, the eXtensible Realtime Multiplatform Language, is transforming the World Wide Web into the Realtime Web.
The new xRTML 3.0 flattens the learning curve for building next-generation apps with bidirectional Realtime communications. Its instantaneous updates employ a fraction of the bandwidth required of traditional request/response and near-real time technologies. Major enhancements include:
 • A more robust, coherent and flexible core framework.
 • Provision for the use of multiple versions.
 • Beta release of a storage layer with built-in connection and security protocol, and provision of read/write permissions.
 • A new templating system focused on data rather than form, making it an invaluable tool for data-dependent applications.
 • An inheritance model to simplify processes and make it easier to extend tags or create new ones from scratch.
 • Metadata to provide a virtual roadmap within your browser.
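xRTML's own syntax isn't shown in the announcement, but the bandwidth claim above – that push-based Realtime updates use a fraction of what request/response polling does – can be illustrated with a back-of-envelope message count (all numbers invented):

```python
def polling_messages(duration_s, poll_interval_s):
    """Request/response polling: one request plus one response every
    interval, whether or not anything actually changed."""
    return 2 * (duration_s // poll_interval_s)

def push_messages(update_count):
    """Realtime push: one message per actual update, nothing when idle."""
    return update_count
```

Over one minute with 5-second polls and only 3 real updates, polling moves 24 messages where push moves 3.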
For further information visit: http://cloudcomputing.sys-con.com/node/2463061

Gartner Highlights Cloud Data Encryption Gateways


Last month Gartner Analyst Jay Heiser conducted an extremely informative and thought-provoking webinar entitled "The Current and Future State of Cloud Security, Risk and Privacy." During the presentation, Mr. Heiser highlighted what he called the "Public Cloud Risk Gap," characterized in part by inadequate processes and technologies by the cloud service providers and in part by a lack of diligence and planning by enterprises using public cloud applications. In many ways, it was a call to arms to ensure that adequate controls, thought and preparation are put to use before public clouds are adopted by enterprises and public sector organizations.
From the side of the cloud application provider, the webinar noted that most cloud service offerings are incomplete when measured against traditional "on-premise" security standards, there are relatively few security-related Service Level Agreements (SLAs), and there is minimal transparency on the security posture of most cloud services. From the enterprise side (the cloud service consumer), he points out that they frequently come to the table with inadequate planning and consideration in the area of security requirements definition and have an incomplete data sensitivity classification governing their data assets. Despite this, the webinar highlighted that organizations of all sizes are increasingly willing to place their data externally, and they are increasingly likely to have at least some formalized processes for the assessment of the associated risk - which is good news.
One innovative part of this new category of solutions is referred to by Gartner as "Cloud Encryption Gateways." These gateways put sensitive data control back into the hands of the enterprise in scenarios where they are using public cloud services. When designed and deployed correctly, they are able to preserve the end user's experience with the cloud application (think of things like "Search" and "Reporting") even while securing the data being processed and stored in the cloud. These Gateways intercept sensitive data while it is still on-premise and replace it with a random tokenized or strongly encrypted value, rendering it meaningless should anyone hack the data while it is in transit, processed or stored in the cloud. If encryption is used, the enterprise controls the key. If tokenization is used, the enterprise controls the token vault. But not all gateways are created equal, so please refer to this recent paper in our Knowledge Center to make sure you ask the right questions when determining which gateway is the right fit for your specific Security, IT and End User needs.
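The tokenization path described above can be sketched in a few lines. The class and method names are invented for illustration; a production gateway would also handle format preservation, search and reporting over tokenized fields:

```python
import secrets

class TokenVault:
    """Swap sensitive values for random tokens before they leave the
    premises; the vault mapping tokens back to real values stays under
    the enterprise's control, never in the cloud."""

    def __init__(self):
        self._vault = {}   # token -> original value, kept on-premise

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)   # meaningless if intercepted
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]
```

The cloud application only ever sees and stores the token; a breach of the cloud side yields nothing usable without the on-premise vault.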

Facebook to roll out HTTPS by default to all users


IDG News Service - Facebook started encrypting the connections of its North American users by default last week as part of a plan to roll out always-on HTTPS (Hypertext Transfer Protocol Secure) to its entire global user base.
For the past several years, security experts and privacy advocates have called on Facebook to enable always-on HTTPS by default because the feature prevents account hijacking attacks over insecure networks and also stops the governments of some countries from spying on the Facebook activities of their residents.
Despite the feature's security benefits, Facebook announced the start of its HTTPS rollout in a post on its Developer Blog last week, and not through its security page or its newsroom. "As announced last year, we are moving to HTTPS for all users," Facebook platform engineer Shireesh Asthana said Thursday in a blog post that also described many other platform changes and bug fixes relevant to developers. "This week, we're starting to roll out HTTPS for all North America users and will be soon rolling out to the rest of the world."
It's not clear when exactly the rollout for the rest of the world will start. "We have no dates to provide at this time, but we will be continuing with a global rollout in the near future," Facebook spokesman Fred Wolens said Tuesday via email. The Electronic Frontier Foundation (EFF), a digital rights organization, welcomed the move via Twitter on Monday, describing it as a "huge step forward for encrypting the web."
The EFF has long been a proponent of always-on HTTPS adoption. In collaboration with the Tor Project, creator of the Tor anonymizing network and software, the EFF maintains a browser extension called HTTPS Everywhere that forces always-on HTTPS connections on websites that only support the feature on an opt-in basis. Twitter, Gmail and other Google services already have HTTPS turned on by default.
Facebook launched always-on HTTPS as an opt-in feature for users in January 2011. However, the initial implementation was lacking because whenever users launched a third-party application that didn't support HTTPS on the website, the entire Facebook connection was switched back to HTTP.
In order to address this problem, in May 2011 Facebook asked all platform application developers to acquire SSL certificates and make their apps HTTPS-compatible by Oct. 1 that same year.
"It is far from a simple task to build out this capability for the more than a billion people that use the site and retain the stability and speed we expect, but we are making progress daily towards this end," Wolens said. "We have already deployed significant performance enhancements to our load balancing infrastructure to mitigate most of the impact of moving to HTTPS, and will be continuing this work as we deploy this feature. In the meantime, we have been working with developers to ensure that their third-party applications are transitioned to HTTPS, and most have already completed this process."

Symantec spots odd malware designed to corrupt databases


Symantec has spotted another odd piece of malware that appears to be targeting Iran and is designed to meddle with SQL databases.
The company discovered the malware, called W32.Narilam, on Nov. 15 but on Friday published a more detailed writeup by Shunichi Imano. Narilam is rated as a "low risk" by the company, but according to a map, the majority of infections are concentrated in Iran, with a few in the U.K., the continental U.S. and the state of Alaska.
Interestingly, Narilam shares some similarities with Stuxnet, the malware targeted at Iran that disrupted its uranium refinement capabilities by interfering with industrial software that ran its centrifuges. Like Stuxnet, Narilam is also a worm, spreading through removable drives and network file shares, Imano wrote.
Once on a machine, it looks for Microsoft SQL databases. It then hunts for specific words in the SQL database -- some of which are in Persian, Iran's main language -- and replaces items in the database with random values or deletes certain fields. Some of the words include "hesabjari," which means current account; "pasandaz," which means savings; and "asnad," which means financial bond, Imano wrote.
"The malware does not have any functionality to steal information from the infected system and appears to be programmed specifically to damage the data held within the targeted database," Imano wrote. "Given the types of objects that the threat searches for, the targeted databases seem to be related to ordering, accounting, or customer management systems belonging to corporations."
The types of databases sought by Narilam are unlikely to be employed by home users. But Narilam could be a headache for companies that use SQL databases but do not keep backups. "The affected organization will likely suffer significant disruption and even financial loss while restoring the database," Imano wrote. "As the malware is aimed at sabotaging the affected database and does not make a copy of the original database first, those affected by this threat will have a long road to recovery ahead of them."
Stuxnet is widely believed to have been created by the U.S. and Israel with the intent of slowing down Iran's nuclear program. Since its discovery in June 2010, researchers have linked it to other malware including Duqu and Flame, indicating a long-running espionage and sabotage campaign that has prompted concern over escalating cyber conflict between nations.

Monday, 3 December 2012

Security in the Public Cloud Is a Shared Responsibility


How to secure your applications running in the Amazon Public Cloud
When you host applications in the public cloud, you assume partial responsibility for securing the application. The cloud provider, for example Amazon Web Services (AWS), secures the physical data center (with locked badge-entry doors, fences, guards, etc.) in addition to securing the physical network with perimeter firewalls. This is no significant change from how you secure your corporate datacenter.
Just as you enhance the security of physical and virtual servers in your datacenter with host-based firewalls (iptables, Windows Firewall), anti-virus and intrusion detection, so you must protect your public cloud servers (in AWS parlance, "instances") with similar security measures. This is the joint or shared security responsibility: AWS secures the physical datacenter and firewalls the network; you, the AWS customer, secure each instance and its application with host-based firewalls, anti-virus and intrusion detection. In addition, if your public cloud applications must be compliant, perhaps with PCI regulations, you can add file integrity monitoring and log file monitoring to each AWS instance.
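The file integrity monitoring mentioned above boils down to hashing watched files and alerting on drift. A minimal sketch (real agents also track permissions, ownership and timestamps; file contents are passed in here to keep the example self-contained):

```python
import hashlib

def baseline(files):
    """Record SHA-256 digests of the watched files' contents, keyed by path."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def changed(files, base):
    """Return the paths whose contents no longer match the recorded baseline."""
    return [path for path, data in files.items()
            if hashlib.sha256(data).hexdigest() != base.get(path)]
```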
Security is shared; no blame goes around. Watch a quick demo of how to enhance the security of your AWS instances and applications.
For further information visit: http://cloudcomputing.sys-con.com/node/2459176

Cloud 2.0: Re-inventing CRM


Cloud computing isn’t just re-inventing technology
Cloud computing isn’t just re-inventing technology; it will also drive evolution of the business practices that the technology is used for.
For example CRM: Customer Relationship Management. This is a discipline that started with simple contact management apps like ACT!, evolved through Goldmine, and then of course Salesforce.com.
After the ASP (Application Service Provider) phase of the Cloud evolution we’ve since had the social media explosion, and so the principal category to add is “social media CRM”. After that came Cloud, and so we’re now at a phase best described as Cloud 2.0. This is most powerfully demonstrated by the public sector, where CRM is about ‘citizen engagement’ and where the core expression of the model can be referenced through ‘CORE’ design, standing for Community Oriented Re-Engineering.
In short this reflects the simple point that online is about communities, and how you re-engineer your business processes to harness this principle is the fundamental nature of this CORE design and therefore how it can be used to implement a Cloud 2.0 strategy.
I have been so excited about the recent Canada Health Infoway publication because it also references the term Cloud 2.0 and this design principle and, most importantly, maps them to possible action areas for the Canadian eHealth sector.
For further information visit: http://cloudcomputing.sys-con.com/node/2463839