Dedicated Server

A dedicated hosting service, dedicated server, or managed hosting service is a type of Internet hosting in which the client leases an entire server not shared with anyone. This is more flexible than shared hosting, as organizations have full control over the server(s), including choice of operating system.

Managed Dedicated Server

To date, no industry standards have been set to clearly define the management role of dedicated server providers. In practice, each provider uses industry-standard terms but defines them differently, so the scope of management included with a dedicated server varies from provider to provider.

SQL Server Compact Edition

The compact edition is an embedded database engine. Unlike the other editions of SQL Server, the SQL CE engine is based on SQL Mobile (initially designed for use with hand-held devices) and does not share the same binaries.

SQL Server Architecture

When writing code for SQL CLR, data stored in SQL Server databases can be accessed using the ADO.NET APIs like any other managed application that accesses SQL Server data.

Bandwidth and Connectivity

Bandwidth refers to the data transfer rate or the amount of data that can be carried from one point to another in a given time period (usually a second) and is often represented in bits (of data) per second (bit/s).
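
As a quick worked example of that definition, transfer time is simply the amount of data (converted to bits) divided by the data rate. The Python sketch below uses an assumed 700 MB file and a 10 Mbit/s link purely for illustration.

    # Transfer time = data size (in bits) / bandwidth (in bits per second).
    # The file size and link speed below are illustrative assumptions.
    def transfer_time_seconds(size_bytes, rate_bits_per_second):
        return (size_bytes * 8) / rate_bits_per_second

    size = 700 * 1024 * 1024       # a 700 MB file, in bytes
    rate = 10_000_000              # a 10 Mbit/s link
    print(f"{transfer_time_seconds(size, rate) / 60:.1f} minutes")   # roughly 9.8 minutes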


Home Server

A home server is a server located in a private residence providing services to other devices inside and/or outside the household through a home network and/or the internet. Such services may include file and/or printer serving, media center serving, web serving, web caching, account authentication and backup services. Because of the relatively low number of computers on a typical home network, a home server commonly does not require significant computing power. Often, users reuse older systems, and home servers with specifications as low as 1 GHz CPU and 256 MB of RAM can be used. Large, preferably fast hard drives (ATA-100 or Serial ATA) and a network interface card are usually all the hardware required for home file serving. An uninterruptible power supply is recommended in case of power outages that can possibly corrupt data.

Commercial home server products
A common type of home server is the plug computer form factor. Most of these are small ARM-based devices running Linux; they have an integrated AC-to-DC power converter and come pre-loaded with various server applications.

Operating systems
Home servers run many different operating systems. Enthusiasts who build their own home servers can use whatever OS is conveniently available or familiar to them, such as Microsoft Windows, Mac OS X, GNU/Linux, Solaris or BSD.

Services provided by home servers
Administration and configuration

Home servers often run headless, and can be administered remotely through a command shell, or graphically through a remote desktop system such as RDP, VNC, Webmin, or many others.

Some home server operating systems, such as Windows Home Server, include a consumer-focused graphical user interface for setup and configuration that is available on home computers on the home network (and remotely over the Internet via remote access). Others rely on the operating system's native tools for configuration.

Centralized storage

Home servers often act as network-attached storage providing the major benefit that all users' files can be centrally and securely stored, with flexible permissions applied to them. Such files can be easily accessed from any other system on the network, provided the correct credentials are supplied. This also applies to shared printers.

Such files can also be shared over the internet to be accessible from anywhere in the world using remote access.

Servers running UNIX or Linux with the free Samba suite (or certain Windows Server products - Windows Home Server excluded) can provide domain control, custom logon scripts, and roaming profiles to users of certain versions of Windows[citation needed]. This allows a user to log on from any machine in the domain and have access to his/her "My Documents" and personalized Windows and application preferences - multiple accounts on each computer in the home are not needed.

Media serving
Home servers are often used to serve multi-media content, including photos, music, and video, to other devices in the household (and even to the Internet; see place shifting, Tonido and Orb). Using standard protocols such as DLNA or proprietary systems such as iTunes, users can access their media stored on the home server from any room in the house. Windows XP Media Center Edition, Windows Vista, and Windows 7 can act as home servers, supporting a particular type of media serving that streams the interactive user experience to Media Center Extenders, including the Xbox 360.

Windows Home Server supports media streaming to Xbox 360 and other DLNA based media receivers via the built-in Windows Media Connect technology. Some Windows Home Server device manufacturers such as Hewlett-Packard extend this functionality with a full DLNA implementation such as PacketVideo TwonkyMedia server.

On a Linux server, there are many free, open-source, fully-functional, all-in-one software solutions for media serving available. One such program is LinuxMCE, which allows other devices to boot off a hard drive image on the server, allowing them to become appliances such as set-top boxes. Amahi is a free Linux Home Server that provides shared storage, automated backups, secure VPN, and shared applications like calendar and wiki. Asterisk, Xine, MythTV (another media serving solution), VideoLAN, SlimServer, DLNA, and many other open-source projects are fully integrated for a seamless home theater/automation/telephony experience.

Because a server is typically always on, it is often more practical to put a TV or radio tuner for recording broadcasts into the server than into, say, a desktop PC, since recordings can then be scheduled at any time.

On an Apple Macintosh server, options include iTunes, PS3 Media Server, Twonky Media Server, Nullriver MediaLink, and Elgato EyeConnect. Additionally, for Macs directly connected to TVs, the built-in Front Row program or Boxee can act as a full-featured media center interface.

Some home servers provide remote access to media and entertainment content.

Remote access

A home server can be used to provide remote access into the home from devices on the Internet, using remote desktop software and other remote administration software. For example, Windows Home Server provides remote access to files stored on the home server via a web interface as well as remote access to Remote Desktop sessions on PCs in the house. Similarly, Tonido provides direct access via a web browser from the internet without requiring any port forwarding or other setup. Some enthusiasts often use VPN technologies as well.

On a Linux server, two popular tools are (among many) VNC and Webmin. VNC allows clients to remotely view a server GUI desktop as if the user was physically sitting in front of the server. A GUI need not be running on the server console for this to occur; there can be multiple 'virtual' desktop environments open at the same time. Webmin allows users to control many aspects of server configuration and maintenance all from a simple web interface. Both can be configured to be accessed from anywhere on the internet.

Servers can also be accessed remotely using the command line-based Telnet and SSH protocols.
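
For instance, a hedged sketch of scripted SSH administration from Python using the third-party paramiko library is shown below; the hostname, username and key path are placeholders, not values from the text.

    import os
    import paramiko  # third-party SSH library: pip install paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Placeholder host and credentials for a home server reachable over SSH.
    client.connect("homeserver.example.com", username="admin",
                   key_filename=os.path.expanduser("~/.ssh/id_rsa"))

    stdin, stdout, stderr = client.exec_command("uptime")   # run a command remotely
    print(stdout.read().decode())
    client.close()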

Web serving
Some users choose to run a web server in order to share files easily and publicly (or privately, on the home network). Others set up web pages and serve them straight from their home, although this may violate some ISPs' terms of service. Sometimes these web servers are run on a nonstandard port to avoid the ISP's port blocking. Example web servers used on home servers include Apache and IIS.

Many other webservers are available; see Comparison of web servers.
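
As a minimal illustration of the kind of lightweight home web serving described above, the Python sketch below serves a directory over HTTP on a nonstandard port; the directory path and port number are assumptions for the example, not recommendations.

    import functools
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serve /srv/share on port 8080 (a nonstandard port, e.g. to sidestep
    # ISP blocking of port 80). Path and port are illustrative.
    handler = functools.partial(SimpleHTTPRequestHandler, directory="/srv/share")
    HTTPServer(("0.0.0.0", 8080), handler).serve_forever()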

Web proxy
Some networks have an HTTP proxy which can be used to speed up web access when multiple users visit the same websites, and to get past blocking software while the owner is using the network of an institution that blocks certain sites. Public proxies are often slow and unreliable, so it can be worth setting up one's own private proxy.

Some proxies can be configured to block websites on the local network if it is set up as a transparent proxy.
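
To give a rough idea of how clients use such a home proxy, the Python sketch below routes a request through it with the requests library; the hostname and port (3128 is a common Squid default) are assumptions.

    import requests

    # Assumed: the home server runs an HTTP proxy on port 3128.
    proxies = {
        "http": "http://homeserver.example.com:3128",
        "https": "http://homeserver.example.com:3128",
    }
    response = requests.get("http://example.com/", proxies=proxies, timeout=10)
    print(response.status_code, len(response.content))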

E-mail
Many home servers also run e-mail servers that handle e-mail for the owner's domain name. The advantages include much larger mailboxes and maximum message sizes than most commercial e-mail services allow. Because the server is on the local network, access to it is much faster than using an external service. This also increases security, as e-mails do not reside on an off-site server.

BitTorrent
Home servers are ideal for utilizing the BitTorrent protocol for downloading and seeding files as some torrents can take days, or even weeks to complete and perform better on an uninterrupted connection. There are many command-line based clients such as rTorrent and web-based ones such as TorrentFlux and Tonido available for this purpose. BitTorrent also makes it easier for those with limited bandwidth to distribute large files over the internet.

Gopher
An unusual service is the Gopher protocol, a hypertext document retrieval protocol which pre-dated the World Wide Web and was popular in the early 1990s. Many of the remaining gopher servers are run off home servers utilizing PyGopherd and the Bucktooth gopher server.

Home automation
Home automation requires a device in the home that is available 24/7. Often such home automation controllers are run on a home server.

Security monitoring
Relatively low cost CCTV DVR solutions are available that allow recording of video cameras to a home server for security purposes. The video can then be viewed on PCs or other devices in the house.

A series of cheap USB webcams can be connected to a home server as a makeshift CCTV system. Optionally, these images and video streams can be made available over the internet using standard protocols.

Family applications
Home servers can act as a host to family oriented applications such as a family calendar, to-do lists, and message boards.

IRC and instant messaging
Because a server is always on, an IRC client or IM client running on it will be highly available to the Internet. This way, the chat client can record activity that occurs even while the user is not at the computer, e.g. asleep or at work or school. Textual clients such as Irssi and tmsnc can be detached using GNU Screen, for example, and graphical clients such as Pidgin can be detached using xmove. Quassel provides a specific version for this kind of use. Home servers can also be used to run personal XMPP servers and IRC servers, as these protocols can support a large number of users on very little bandwidth.

Online gaming
Some multiplayer games such as Continuum, and Tremulous have server software available which users may download and use to run their own private game server. Some of these servers are password protected, so only a selected group of people such as clan members can gain access to the server. Others are open for public use and may move to colocation or other forms of paid hosting if they gain a large number of players.

Third-party platforms
Home servers are often platforms that enable third-party products to be built and added over time. For example, Windows Home Server provides a Software Development Kit, and over 60 third-party products are available for it. Similarly, Tonido provides an application platform that can be extended by writing new applications using its SDK.

Grid Computing

Grid computing is a term referring to the combination of computer resources from multiple administrative domains to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files. What distinguishes grid computing from conventional high performance computing systems such as cluster computing is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. Although a grid can be dedicated to a specialized application, it is more common that a single grid will be used for a variety of different purposes. Grids are often constructed with the aid of general-purpose grid software libraries known as middleware.

Overview
Grid computing combines computers from multiple administrative domains to reach a common goal or to solve a single task; the grid may be assembled only for the duration of that task and may then disappear just as quickly.

One of the main strategies of grid computing is to use middleware to divide and apportion pieces of a program among several computers, sometimes up to many thousands. Grid computing involves computation in a distributed fashion, which may also involve the aggregation of large-scale cluster computing-based systems.
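
A very rough, single-machine analogy of that divide-and-apportion step is sketched below in Python: a job is split into independent work units and handed to a pool of workers. Real grid middleware (BOINC, Globus and the like) dispatches such units to networked nodes rather than local processes, but the shape of the idea is the same.

    from concurrent.futures import ProcessPoolExecutor

    def process_unit(unit):
        # Stand-in for a CPU-heavy, independent computation on one work unit.
        return sum(i * i for i in range(unit))

    if __name__ == "__main__":
        work_units = [200_000 + i for i in range(16)]   # pieces of the larger program
        with ProcessPoolExecutor() as pool:             # the "middleware" apportioning work
            results = list(pool.map(process_unit, work_units))
        print(len(results), "work units completed")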

The size of a grid may vary from small—confined to a network of computer workstations within a corporation, for example—to large, public collaborations across many companies and networks. "The notion of a confined grid may also be known as an intra-nodes cooperation whilst the notion of a larger, wider grid may thus refer to an inter-nodes cooperation".

Grids are a form of distributed computing whereby a “super virtual computer” is composed of many networked loosely coupled computers acting together to perform very large tasks. This technology has been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back office data processing in support for e-commerce and Web services.

Comparison of grids and conventional supercomputers
“Distributed” or “grid” computing in general is a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a network (private, public or the Internet) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus.[citation needed]

The primary advantage of distributed computing is that each node can be purchased as commodity hardware, which, when combined, can produce a similar computing resource as a multiprocessor supercomputer, but at a lower cost. This is due to the economies of scale of producing commodity hardware, compared to the lower efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. This arrangement is thus well-suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors.[citation needed] The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public Internet.[citation needed]

There are also some differences in programming and deployment. It can be costly and difficult to write programs that can run in the environment of a supercomputer, which may have a custom operating system, or require the program to address concurrency issues. If a problem can be adequately parallelized, a “thin” layer of “grid” infrastructure can allow conventional, standalone programs, given a different part of the same problem, to run on multiple machines. This makes it possible to write and debug on a single conventional machine, and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.[citation needed]

Design considerations and variations
One feature of distributed grids is that they can be formed from computing resources belonging to multiple individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks.[citation needed]

One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes.[citation needed]
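
The redundancy check described above can be sketched in a few lines of Python: each work unit goes to two randomly chosen nodes, and the result is accepted only if the answers agree. The "nodes" here are plain functions standing in for remote volunteer machines, purely for illustration.

    import random

    def honest_node(unit):
        return unit * unit

    def faulty_node(unit):
        return unit * unit + random.choice([0, 0, 0, 1])   # occasionally wrong

    nodes = [honest_node, honest_node, honest_node, faulty_node]

    def run_with_redundancy(unit):
        first, second = random.sample(nodes, 2)   # two different nodes per unit
        a, b = first(unit), second(unit)
        return a if a == b else None              # None marks a discrepancy

    flagged = [u for u in range(100) if run_with_redundancy(u) is None]
    print("work units needing reassignment:", flagged)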

Due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dialup Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results in expected time.[citation needed]

The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated computer cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors.[citation needed] In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust “client” nodes must place in the central system such as placing applications in virtual machines.[citation needed]

Public systems or those crossing administrative domains (including different departments in the same organization) often result in the need to run on heterogeneous systems, using different operating systems and hardware architectures. With many languages, there is a trade-off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this trade-off, though potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform).[citation needed]

Various middleware projects have created generic infrastructure to allow diverse scientific and commercial projects to harness a particular associated grid or for the purpose of setting up new grids. BOINC is a common one for various academic projects seeking public volunteers;[citation needed] more are listed at the end of the article.

In fact, the middleware can be seen as a layer between the hardware and the software. On top of the middleware, a number of technical areas have to be considered, and these may or may not be middleware independent. Example areas include SLA management, Trust and Security, Virtual organization management, License Management, Portals and Data Management. These technical areas may be taken care of in a commercial solution, though the cutting edge of each area is often found within specific research projects examining the field.[citation needed]

Market segmentation of the grid computing market
For the segmentation of the grid computing market, two perspectives need to be considered: the provider side and the user side.

The provider side
The overall grid market comprises several specific markets. These are the grid middleware market, the market for grid-enabled applications, the utility computing market, and the software-as-a-service (SaaS) market.

Grid middleware is a specific software product which enables the sharing of heterogeneous resources and virtual organizations. It is installed and integrated into the existing infrastructure of the involved company or companies, and provides a special layer placed between the heterogeneous infrastructure and the specific user applications. Major grid middleware packages include the Globus Toolkit, gLite, and UNICORE.

Utility computing refers to the provision of grid computing and applications as a service, either as an open grid utility or as a hosting solution for one organization or a VO. Major players in the utility computing market are Sun Microsystems, IBM, and HP.

Grid-enabled applications are specific software applications that can utilize grid infrastructure. This is made possible by the use of grid middleware, as pointed out above.

Software as a service (SaaS) is “software that is owned, delivered and managed remotely by one or more providers” (Gartner 2007). Additionally, SaaS applications are based on a single set of common code and data definitions. They are consumed in a one-to-many model, and SaaS uses a pay-as-you-go (PAYG) model or a usage-based subscription model. Providers of SaaS do not necessarily own the computing resources required to run their SaaS, and may therefore draw upon the utility computing market, which provides computing resources for SaaS providers.

The user side
For companies on the demand or user side of the grid computing market, the different segments have significant implications for their IT deployment strategy. The IT deployment strategy, as well as the type of IT investments made, are relevant aspects for potential grid users and play an important role in grid adoption.

CPU scavenging
CPU-scavenging, cycle-scavenging, cycle stealing, or shared computing creates a “grid” from the unused resources in a network of participants (whether worldwide or internal to an organization). Typically this technique uses desktop computer instruction cycles that would otherwise be wasted at night, during lunch, or even in the scattered seconds throughout the day when the computer is waiting for user input or slow devices.

Many Volunteer computing projects, such as BOINC, use the CPU scavenging model.

In practice, participating computers also donate some supporting amount of disk storage space, RAM, and network bandwidth in addition to raw CPU power. The heat produced by the processors in rooms with many computers can even be used to help heat the premises.[citation needed] Since nodes are likely to go "offline" from time to time, as their owners use their resources for their primary purpose, this model must be designed to handle such contingencies.
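
A hedged sketch of the scavenging loop itself is shown below in Python, using the third-party psutil library to approximate "idle" as low CPU utilisation; the 20% threshold and the dummy work unit are assumptions, and real volunteer-computing clients such as BOINC use far more careful idle detection and checkpointing.

    import time
    import psutil  # third-party: pip install psutil

    def one_work_unit():
        # Stand-in for a chunk of donated computation.
        return sum(i * i for i in range(500_000))

    while True:
        if psutil.cpu_percent(interval=1) < 20:   # machine looks otherwise idle
            one_work_unit()
        else:
            time.sleep(5)                         # back off while the owner is working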

History
The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid. The metaphor became canonical with Ian Foster's and Carl Kesselman's seminal work, "The Grid: Blueprint for a New Computing Infrastructure" (2004).

CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in 1999 by SETI@home to harness the power of networked PCs worldwide, in order to solve CPU-intensive research problems.[citation needed]

The ideas of the grid (including those from distributed computing, object-oriented programming, and Web services) were brought together by Ian Foster, Carl Kesselman, and Steve Tuecke, widely regarded as the "fathers of the grid". They led the effort to create the Globus Toolkit incorporating not just computation management but also storage management, security provisioning, data movement, monitoring, and a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services, and information aggregation. While the Globus Toolkit remains the de facto standard for building grid solutions, a number of other tools have been built that answer some subset of services needed to create an enterprise or global grid.

In 2007 the term cloud computing came into popularity, which is conceptually similar to the canonical Foster definition of grid computing (in terms of computing resources being consumed as electricity is from the power grid). Indeed, grid computing is often (but not always) associated with the delivery of cloud computing systems as exemplified by the AppLogic system from 3tera.[citation needed]

Fastest virtual supercomputers
  • BOINC – 5.634 PFLOPS as of April 4, 2011.
  • Folding@home – 5 PFLOPS as of March 17, 2009.
  • MilkyWay@Home – over 1.6 PFLOPS as of April 2010[update], with a large amount of this work coming from GPUs.
  • SETI@home – averaging more than 730 TFLOPS as of April 2010[update].
  • Einstein@Home – more than 210 TFLOPS as of April 2010[update].
  • GIMPS – sustaining 44 TFLOPS as of April 2010[update].

Current projects and applications
Grid computing offers a way to solve Grand Challenge problems such as protein folding, financial modeling, earthquake simulation, and climate/weather modeling. Grids offer a way of using information technology resources optimally inside an organization. They also provide a means for offering information technology as a utility for commercial and noncommercial clients, with those clients paying only for what they use, as with electricity or water.

Grid computing is being applied by the National Science Foundation's National Technology Grid, NASA's Information Power Grid, Pratt & Whitney, Bristol-Myers Squibb Co., and American Express.[citation needed]

One of the most famous cycle-scavenging networks is SETI@home, which was using more than 3 million computers to achieve 23.37 sustained teraflops (979 lifetime teraflops) as of September 2001[update].

As of August 2009 Folding@home achieves more than 4 petaflops on over 350,000 machines.

The European Union has been a major proponent of grid computing. Many projects have been funded through the framework programme of the European Commission. Many of the projects are highlighted below, but two deserve special mention: BEinGRID and Enabling Grids for E-sciencE.[citation needed]

BEinGRID (Business Experiments in Grid) is a research project partly funded by the European Commission[citation needed] as an Integrated Project under the Sixth Framework Programme (FP6) sponsorship program. Started on June 1, 2006, the project was scheduled to run 42 months, until November 2009. The project is coordinated by Atos Origin. According to the project fact sheet, their mission is “to establish effective routes to foster the adoption of grid computing across the EU and to stimulate research into innovative business models using Grid technologies”. To extract best practice and common themes from the experimental implementations, two groups of consultants are analyzing a series of pilots, one technical and one business. The project is significant not only for its long duration, but also for its budget, which, at 24.8 million euros, is the largest of any FP6 integrated project. Of this, 15.7 million is provided by the European Commission and the remainder by its 98 contributing partner companies.

The Enabling Grids for E-sciencE project, which is based in the European Union and includes sites in Asia and the United States, is a follow-up project to the European DataGrid (EDG) and is arguably the largest computing grid on the planet. This, along with the LHC Computing Grid[11] (LCG), has been developed to support the experiments using the CERN Large Hadron Collider. The LCG project is driven by CERN's need to handle huge amounts of data, where storage rates of several gigabytes per second (10 petabytes per year) are required. A list of active sites participating within LCG can be found online as can real time monitoring of the EGEE infrastructure. The relevant software and documentation is also publicly accessible. There is speculation that dedicated fiber optic links, such as those installed by CERN to address the LCG's data-intensive needs, may one day be available to home users thereby providing internet services at speeds up to 10,000 times faster than a traditional broadband connection.

Another well-known project is distributed.net, which was started in 1997 and has run a number of successful projects in its history.

The NASA Advanced Supercomputing facility (NAS) has run genetic algorithms using the Condor cycle scavenger running on about 350 Sun and SGI workstations.

In 2001, United Devices operated the United Devices Cancer Research Project based on its Grid MP product, which cycle-scavenges on volunteer PCs connected to the Internet. The project ran on about 3.1 million machines before its close in 2007.

As of 2011, over 6.2 million machines running the open-source Berkeley Open Infrastructure for Network Computing (BOINC) platform are members of the World Community Grid, which tops the processing power of the current fastest supercomputer system (China's Tianhe-I).

Definitions
Today there are many definitions of grid computing:
  •  In his article “What is the Grid? A Three Point Checklist”, Ian Foster lists these primary attributes:
          o Computing resources are not administered centrally.
          o Open standards are used.
          o Nontrivial quality of service is achieved.
  • Plaszczak/Wellner define grid technology as "the technology that enables resource virtualization, on-demand provisioning, and service (resource) sharing between organizations."
  • IBM defines grid computing as “the ability, using a set of open standards and protocols, to gain access to applications and data, processing power, storage capacity and a vast array of other computing resources over the Internet. A grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across ‘multiple’ administrative domains based on their (resources) availability, capacity, performance, cost and users' quality-of-service requirements”.
  • An earlier example of the notion of computing as utility was in 1965 by MIT's Fernando Corbató. Corbató and the other designers of the Multics operating system envisioned a computer facility operating “like a power company or water company”.
  • Buyya/Venugopal define grid as "a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed autonomous resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements".
  • CERN, one of the largest users of grid technology, talks of The Grid: “a service for sharing computer power and data storage capacity over the Internet.”
Grids can be categorized with a three-stage model of departmental grids, enterprise grids and global grids. These correspond to a firm initially utilising resources within a single group, such as an engineering department connecting desktop machines, clusters and equipment. This progresses to enterprise grids, where nontechnical staff's computing resources can be used for cycle-stealing and storage. A global grid is a connection of enterprise and departmental grids that can be used in a commercial or collaborative manner.

Clustered web hosting


Clustered hosting is a type of web hosting that spreads the load of hosting across multiple physical machines ("nodes"), increasing availability and decreasing the chances of one service (e.g., FTP or email) affecting another (e.g., MySQL). Many large websites run on clustered hosting solutions; for example, large discussion forums tend to run using multiple front-end web servers with multiple back-end database servers.

Typically, most hosting infrastructures are based on the paradigm of using a single physical machine to host multiple hosted services, including web, database, email, FTP and others. A single physical machine is not only a single point of failure, but also has a finite capacity for traffic, which in practice can be troublesome for a busy website or for a website that is experiencing transient bursts in traffic.

By clustering services across multiple machines and using load balancing, single points of failure can be eliminated, increasing the availability of a website and other web services beyond that of ordinary single-server hosting. A single server can require periodic reboots for software upgrades and the like, whereas on a clustered platform restarts can be staggered so that the service remains available while all necessary machines in the cluster are upgraded.

Clustered hosting is similar to cloud hosting, in that the resources of many machines are available for a website to utilize on demand, making scalability a large advantage to a clustered hosting solution.

Cloud Computing


Cloud computing refers to the provision of computational resources on demand via a computer network. Because the cloud is an underlying delivery mechanism, cloud-based applications and services may support any type of software application or service in use today. Before the advent of computer networks, both data and software were stored and processed on or near the computer. The development of local area networks (LANs) allowed for a tiered architecture in which multiple CPUs and storage devices may be organized to increase the performance of the entire system. LANs were widely deployed in corporate environments in the 1990s, and are notable for vendor-specific connectivity limitations. These limitations gave rise to the marketing term "Islands of Information", which was widely used within the computing industry. The widespread implementation of the TCP/IP protocol stack and the subsequent popularization of the web has led to multi-vendor networks that are no longer limited by company walls.

Cloud computing fundamentally allows for a functional separation between the resources used and the user's computer. The computing resources may or may not reside outside the local network, for example in an internet-connected datacenter. What is important to the individual user is that the resources 'simply work'. This separation between the resources used and the user's computer has also allowed for the development of new business models. All of the development and maintenance tasks involved in provisioning the application are performed by the service provider. The user's computer may contain very little software or data (perhaps a minimal operating system and web browser only), serving as little more than a display terminal for processes occurring on a network of computers far away. Consumers now routinely use data-intensive applications driven by cloud technology which were previously unavailable due to cost and deployment complexity. In many companies, employees and departments are bringing a flood of consumer technology into the workplace, and this raises legal compliance and security concerns for the corporation.

A common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is "The Cloud". The most common analogy to explain cloud computing is that of public utilities such as electricity, gas, and water. Just as centralized and standardized utilities free individuals from the difficulties of generating electricity or pumping water, cloud computing frees users from certain hardware and software installation and maintenance tasks through the use of simpler hardware that accesses a vast network of computing resources (processors, hard drives, etc.). The sharing of resources reduces the cost to individuals.

The phrase “cloud computing” originated from the cloud symbol that is usually used in flow charts and diagrams to symbolize the internet. The principle behind the cloud is that any computer connected to the internet is connected to the same pool of computing power, applications, and files. Users can store and access personal files such as music, pictures, videos, and bookmarks, or play games or use productivity applications on a remote server rather than physically carrying around a storage medium such as a DVD or thumb drive. Almost all users of the internet may be using a form of cloud computing, though few realize it. Those who use web-based email such as Gmail, Hotmail, Yahoo, a company-owned email service, or even an e-mail client program such as Outlook, Evolution, Mozilla Thunderbird or Entourage are making use of cloud email servers. Hence, desktop applications which connect to cloud email would be considered cloud applications.

How it works
Cloud computing utilizes the network as a means to connect user endpoint devices (endpoints) to resources that are centralized in a data center. The data center may be accessed via the internet or a company network, or both. In many cases a cloud service may allow access from a variety of endpoints such as a mobile phone, a PC or a tablet. Cloud services may be designed to be vendor-agnostic, working equally well with Linux, Mac and PC platforms. They can also allow access from any internet-connected location, allowing mobile workers to access business systems remotely, as in telecommuting, and extending the reach of business services provided by outsourcing.

A user endpoint with minimal software requirements may submit a task for processing. The service provider may pool the processing power of multiple remote computers in "the cloud" to achieve the task, such as data warehousing of hundreds of terabytes, managing and synchronizing multiple documents online, or computationally intensive work. These tasks would normally be difficult, time consuming, or expensive for an individual user or a small company to accomplish. The outcome of the processing task is returned to the client over the network. In essence, the heavy lifting of a task is outsourced to an external entity with more resources and expertise.

The services – such as data storage and processing – and software are provided by the company hosting the remote computers. The clients are only responsible for having a simple computer with a connection to the Internet, or a company network, in order to make requests to and receive data from the cloud. Computation and storage are divided among the remote computers in order to handle large volumes of both, so the client need not purchase expensive hardware to handle the task.

Technical description
The National Institute of Standards and Technology (NIST) provides a concise and specific definition:

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Cloud computing provides computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. Parallels to this concept can be drawn with the electricity grid, where end-users consume power without needing to understand the component devices or infrastructure required to provide the service.

Cloud computing describes a new supplement, consumption, and delivery model for IT services based on Internet protocols, and it typically involves provisioning of dynamically scalable and often virtualized resources. It is a byproduct and consequence of the ease-of-access to remote computing sites provided by the Internet. This may take the form of web-based tools or applications that users can access and use through a web browser as if they were programs installed locally on their own computers.

Cloud computing providers deliver applications via the internet, which are accessed from a Web browser, while the business software and data are stored on servers at a remote location. In some cases, legacy applications (line of business applications which until now have been prevalent in thick client Windows computing) are delivered via a screen sharing technology such as Citrix XenApp, while the compute resources are consolidated at a remote data center location; in other cases entire business applications have been coded using web based technologies such as AJAX.

Most cloud computing infrastructures consist of services delivered through shared data centers. The cloud may appear as a single point of access for consumers' computing needs; notable examples include the iTunes Store and the iPhone App Store. Commercial offerings may be required to meet service level agreements (SLAs), but specific terms are less often negotiated by smaller companies.

Overview
Comparisons

Cloud computing shares characteristics with:

Autonomic computing — "computer systems capable of self-management."
Client–server model – client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requesters (clients).
Grid computing — "a form of distributed computing and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."
Mainframe computer — powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.
Utility computing — the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."
Peer-to-peer – distributed architecture without the need for central coordination, with participants being at the same time both suppliers and consumers of resources (in contrast to the traditional client–server model).
Service-oriented computing – Cloud computing provides services related to computing while, in a reciprocal manner, service-oriented computing consists of the computing techniques that operate on software-as-a-service.

Characteristics
The key characteristic of cloud computing is that the computing is "in the cloud"; that is, the processing (and the related data) is not in a specified, known or static place(s). This is in contrast to a model in which the processing takes place in one or more specific servers that are known. All the other concepts mentioned are supplementary or complementary to this concept.

Architecture

Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services and 3-tier architecture. This resembles the Unix philosophy of having multiple programs each doing one thing well and working together over universal interfaces. Complexity is controlled and the resulting systems are more manageable than their monolithic counterparts.

The two most significant components of cloud computing architecture are known as the front end and the back end. The front end is the part seen by the client, i.e. the computer user. This includes the client’s network (or computer) and the applications used to access the cloud via a user interface such as a web browser. The back end of the cloud computing architecture is the ‘cloud’ itself, comprising various computers, servers and data storage devices.

History
The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents.

Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture, autonomic and utility computing. Details are abstracted from end-users, who no longer have need for expertise in, or control over, the technology infrastructure "in the cloud" that supports them.
The underlying concept of cloud computing dates back to the 1960s, when John McCarthy opined that "computation may someday be organized as a public utility." Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry and the use of public, private, government and community forms, were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility.

The actual term "cloud" borrows from telephony in that telecommunications companies, who until the 1990s primarily offered dedicated point-to-point data circuits, began offering Virtual Private Network (VPN) services with comparable quality of service but at a much lower cost. By switching traffic to balance utilization as they saw fit, they were able to utilize their overall network bandwidth more effectively. The cloud symbol was used to denote the demarcation point between that which was the responsibility of the provider, and that which was the responsibility of the user. Cloud computing extends this boundary to cover servers as well as the network infrastructure. The first scholarly use of the term “cloud computing” was in a 1997 lecture by Ramnath Chellappa.

After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernizing their data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby small, fast-moving "two-pizza teams" could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006. The first exposure of the term cloud computing to public media was by then-Google CEO Eric Schmidt at SES San Jose 2006. It was reported in 2011 that Amazon has thousands of corporate customers, from large ones like Pfizer and Netflix to start-ups; among them are many corporations that live on Amazon's web services, including Foursquare, a location-based social networking site; Quora, a question-and-answer service; Reddit, a site for news-sharing; and BigDoor, a maker of game tools for Web publishers.

In 2007, Google, IBM and a number of universities embarked on a large-scale cloud computing research project. In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds. In the same year, efforts were focused on providing QoS guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project. By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them" and observed that "[o]rganisations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to cloud computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."

Key characteristics
  •  Agility improves with users' ability to rapidly and inexpensively re-provision technological infrastructure resources.
  • Application Programming Interface (API) accessibility to software that enables machines to interact with cloud software in the same way the user interface facilitates interaction between humans and computers. Cloud computing systems typically use REST-based APIs (see the sketch after this list).
  • Cost is claimed to be greatly reduced and in a public cloud delivery model capital expenditure is converted to operational expenditure. This ostensibly lowers barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house).
  • Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
  • Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:
          o Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
          o Peak-load capacity increases (users need not engineer for highest possible load-levels)
          o Utilization and efficiency improvements for systems that are often only 10–20% utilized.
  •  Reliability is improved if multiple redundant sites are used, which makes well designed cloud computing suitable for business continuity and disaster recovery.
  • Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads.
  • Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
  • Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than under traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. However, the complexity of security is greatly increased when data is distributed over a wider area or greater number of devices and in multi-tenant systems which are being shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
  • Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer. They are easier to support and to improve, as the changes reach the clients instantly.
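
As promised in the API item above, here is a hypothetical illustration of a REST-style cloud API call in Python: provisioning a virtual machine with one authenticated HTTP request. The endpoint, token and JSON fields are invented for the example; real providers each define their own resources, authentication and schemas.

    import requests

    API = "https://cloud.example.com/v1"              # invented endpoint
    headers = {"Authorization": "Bearer EXAMPLE_TOKEN"}

    resp = requests.post(
        f"{API}/servers",
        json={"name": "web-01", "flavor": "small", "image": "ubuntu-22.04"},
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    print("provisioned server id:", resp.json().get("id"))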

Layers
Once an Internet Protocol connection is established among several computers, it is possible to share services within any one of the following layers.
 
Cloud clients
A cloud client consists of computer hardware and/or computer software that relies on cloud computing for application delivery, or that is specifically designed for delivery of cloud services and that, in either case, is essentially useless without it. Examples include some computers, phones and other devices, operating systems and browsers.

Cloud applications
Cloud application services or "Software as a Service (SaaS)" deliver software as a service over the Internet, eliminating the need to install and run the application on the customer's own computers and simplifying maintenance and support. People tend to use the terms "SaaS" and "cloud" interchangeably, when in fact they are two different things.[citation needed] Key characteristics include:[40][clarification needed]
  •  Network-based access to, and management of, commercially available (i.e., not custom) software
  • Activities that are managed from central locations rather than at each customer's site, enabling customers to access applications remotely via the Web
  • Application delivery that typically is closer to a one-to-many model (single instance, multi-tenant architecture) than to a one-to-one model, including architecture, pricing, partnering, and management characteristics
  • Centralized feature updating, which obviates the need for downloadable patches and upgrades

 Cloud platforms
Cloud platform services or "Platform as a Service (PaaS)" deliver a computing platform and/or solution stack as a service, often consuming cloud infrastructure and sustaining cloud applications. It facilitates deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers.
 
Cloud infrastructure
Cloud infrastructure services, also known as "Infrastructure as a Service" (IaaS), deliver computer infrastructure – typically a platform virtualization environment – as a service. Rather than purchasing servers, software, data-center space or network equipment, clients instead buy those resources as a fully outsourced service. Suppliers typically bill such services on a utility computing basis; the amount of resources consumed (and therefore the cost) will typically reflect the level of activity. IaaS evolved from virtual private server offerings.

Cloud infrastructure often takes the form of a tier 3 data center with many tier 4 attributes, assembled from hundreds of virtual machines.
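
To make the utility-style billing mentioned above concrete, the toy calculation below adds up a month of assumed usage at invented rates; none of the figures correspond to any real provider's prices.

    instance_hours = 24 * 30        # one small virtual machine, running all month
    storage_gb_months = 100         # provisioned block storage
    egress_gb = 50                  # data transferred out

    rate_per_instance_hour = 0.05   # assumed $/hour
    rate_per_gb_month = 0.10        # assumed $/GB-month
    rate_per_egress_gb = 0.09       # assumed $/GB

    total = (instance_hours * rate_per_instance_hour
             + storage_gb_months * rate_per_gb_month
             + egress_gb * rate_per_egress_gb)
    print(f"estimated monthly bill: ${total:.2f}")     # 36.00 + 10.00 + 4.50 = $50.50
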
Server

The servers layer consists of computer hardware and/or computer software products that are specifically designed for the delivery of cloud services, including multi-core processors, cloud-specific operating systems and combined offerings.
 
Deployment models
Public cloud

Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who bills on a fine-grained utility computing basis.

Community cloud
A community cloud may be established where several organizations have similar requirements and seek to share infrastructure so as to realize some of the benefits of cloud computing. The costs are spread over fewer users than a public cloud (but more than a single tenant). This option may offer a higher level of privacy, security and/or policy compliance. In addition, it can be economically attractive, as the resources (storage, workstations) utilized and shared in the community are already in use and have reached their return on investment. Examples of community clouds include Google's "Gov Cloud".
 
Hybrid cloud and hybrid IT delivery
The main responsibility of the IT department is to deliver services to the business. With the proliferation of cloud computing (both private and public), and the fact that IT departments must also deliver services via traditional, in-house methods, the newest catch-phrase has become "hybrid cloud computing." Hybrid cloud is also called hybrid delivery by the major vendors, including HP, IBM, Oracle and VMware, which offer technology to manage the performance, security and privacy concerns that result from the mixed delivery methods of IT services.

A hybrid storage cloud uses a combination of public and private storage clouds. Hybrid storage clouds are often useful for archiving and backup functions, allowing local data to be replicated to a public cloud.

Another perspective on deploying a web application in the cloud is using Hybrid Web Hosting, where the hosting infrastructure is a mix between cloud hosting and managed dedicated servers – this is most commonly achieved as part of a web cluster in which some of the nodes are running on real physical hardware and some are running on cloud server instances.[citation needed]
Combined cloud

Two clouds that have been joined together are more correctly called a "combined cloud". A combined cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises".[53] By integrating multiple cloud services users may be able to ease the transition to public cloud services while avoiding issues such as PCI compliance.
 
Private cloud
Douglas Parkhill first described the concept of a "private computer utility" in his 1966 book The Challenge of the Computer Utility. The idea was based upon direct comparison with other industries (e.g. the electricity industry) and the extensive use of hybrid supply models to balance and mitigate risks.

"Private cloud" and "internal cloud" have been described as neologisms, but the concepts themselves pre-date the term cloud by 40 years. Even within modern utility industries, hybrid models still exist despite the formation of reasonably well-functioning markets and the ability to combine multiple providers.

Some vendors have used the terms to describe offerings that emulate cloud computing on private networks. These (typically virtualization automation) products offer the ability to host applications or virtual machines in a company's own set of hosts. These provide the benefits of utility computing – shared hardware costs, the ability to recover from failure, and the ability to scale up or down depending upon demand.

Private clouds have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from lower up-front capital costs and less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept". Enterprise IT organizations use their own private cloud(s) for mission critical and other operational systems to protect critical infrastructures. Therefore, for all intents and purposes, "private clouds" are not an implementation of cloud computing at all, but are in fact an implementation of a technology subset: the basic concept of virtualized computing.
 
Cloud engineering
Cloud engineering is the application of a systematic, disciplined, quantifiable, and interdisciplinary approach to the ideation, conceptualization, development, operation, and maintenance of cloud computing, as well as the study and applied research of that approach, i.e., the application of engineering to the cloud. It is a maturing and evolving discipline intended to facilitate the adoption, strategization, operationalization, industrialization, standardization, productization, commoditization, and governance of cloud solutions, leading towards a cloud ecosystem. Cloud engineering is also known as cloud service engineering.
Cloud storage
Cloud storage is a model of networked computer data storage where data is stored on multiple virtual servers, generally hosted by third parties, rather than on dedicated servers. Hosting companies operate large data centers, and people who require their data to be hosted buy or lease storage capacity from them and use it for their storage needs. The data center operators, in the background, virtualize the resources according to the requirements of the customer and expose them as virtual servers, which the customers can themselves manage. Physically, the resource may span multiple servers.
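
As a hedged illustration of how a customer consumes such storage, the sketch below uses boto3 against an S3-compatible object store; the bucket name and file names are hypothetical, and credentials are assumed to be already configured in the environment.

# Minimal sketch: storing and retrieving an object in an S3-compatible
# cloud storage service via boto3. Bucket and file names are hypothetical;
# boto3 reads credentials from the environment or its config files.
import boto3

s3 = boto3.client("s3")  # assumes credentials/region are already configured

BUCKET = "example-home-backups"  # hypothetical bucket name

# Upload a local file; the provider decides where the bytes physically live.
s3.upload_file("backup.tar.gz", BUCKET, "backups/backup.tar.gz")

# Later, fetch it back from any machine with the right credentials.
s3.download_file(BUCKET, "backups/backup.tar.gz", "restored.tar.gz")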
 
The Intercloud
The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet "network of networks" on which it is based. The term was first used in the context of cloud computing in 2007, when Kevin Kelly stated that "eventually we'll have the intercloud, the cloud of clouds. This Intercloud will have the dimensions of one machine comprising all servers and attendant cloudbooks on the planet." It became popular in 2009 and has also been used to describe the data center of the future.

The Intercloud scenario is based on the key observation that no single cloud has infinite physical resources. If a cloud saturates the computational and storage resources of its virtualization infrastructure, it would not be able to satisfy further requests for service allocations from its clients. The Intercloud scenario aims to address such a situation: in theory, each cloud can use the computational and storage resources of the virtualization infrastructures of other clouds. This form of pay-for-use may introduce new business opportunities among cloud providers if they manage to move beyond the theoretical framework. Nevertheless, the Intercloud raises many more challenges than solutions concerning cloud federation, security, interoperability, quality of service, vendor lock-in, trust, legal issues, monitoring and billing.

The concept of a competitive utility computing market combining many computer utilities was originally described by Douglas Parkhill in his 1966 book, The Challenge of the Computer Utility. This concept has been used many times over the subsequent 40 years and is identical to the Intercloud.
 
Issues
Privacy

The cloud model has been criticized by privacy advocates because the companies hosting cloud services can more easily control, and thus monitor at will, lawfully or unlawfully, the communications and data passing between the user and the host company. Instances such as the secret NSA program that, working with AT&T and Verizon, recorded over 10 million phone calls between American citizens cause uncertainty among privacy advocates about the greater powers such arrangements give telecommunications companies to monitor user activity. While there have been efforts (such as US-EU Safe Harbor) to "harmonize" the legal environment, providers such as Amazon still cater to major markets (typically the United States and the European Union) by deploying local infrastructure and allowing customers to select "availability zones."
 
Compliance
In order to obtain compliance with regulations such as FISMA, HIPAA and SOX in the United States, the Data Protection Directive in the EU and the credit card industry's PCI DSS, users may have to adopt community or hybrid deployment models, which are typically more expensive and may offer restricted benefits. This is how Google is able to "manage and meet additional government policy requirements beyond FISMA" and Rackspace Cloud is able to claim PCI compliance. Customers in the EU contracting with cloud providers established outside the EU/EEA must adhere to the EU regulations on the export of personal data.

Many providers also obtain SAS 70 Type II certification (e.g. Amazon, Salesforce.com, Google and Microsoft), but this has been criticised on the grounds that the hand-picked set of goals and standards determined by the auditor and the auditee are often not disclosed and can vary widely. Providers typically make this information available on request, under non-disclosure agreement.

Legal
In March 2007, Dell applied to trademark the term "cloud computing" (U.S. Trademark 77,139,082) in the United States. The "Notice of Allowance" the company received in July 2008 was canceled in August, resulting in a formal rejection of the trademark application less than a week later. Since 2007, the number of trademark filings covering cloud computing brands, goods and services has increased at an almost exponential rate. As companies sought to better position themselves for cloud computing branding and marketing efforts, cloud computing trademark filings increased by 483% between 2008 and 2009. In 2009, 116 cloud computing trademarks were filed, and trademark analysts predict that over 500 such marks could be filed during 2010.

Other legal cases may shape the use of cloud computing by the public sector. On October 29, 2010, Google filed a lawsuit against the U.S. Department of Interior, which opened up a bid for software that required that bidders use Microsoft's Business Productivity Online Suite. Google sued, calling the requirement "unduly restrictive of competition." Scholars have pointed out that, beginning in 2005, the prevalence of open standards and open source may have an impact on the way that public entities choose to select vendors.

Open source
Open source software has provided the foundation for many cloud computing implementations. In November 2007, the Free Software Foundation released the Affero General Public License, a version of GPLv3 intended to close a perceived legal loophole associated with free software designed to be run over a network.
 
Cloud standards
Most cloud providers expose APIs which are typically well-documented (often under a Creative Commons license) but also unique to their implementation and thus not interoperable. Some vendors have adopted others' APIs and there are a number of open standards under development, including the OGF's Open Cloud Computing Interface. The Open Cloud Consortium (OCC) is working to develop consensus on early cloud computing standards and practices.
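
The interoperability problem can be sketched in code: because each provider exposes a different call for the same operation, portable applications typically wrap providers behind a neutral interface. Everything below (both provider classes and their method names) is hypothetical and illustrates only the adapter pattern, not any real vendor API or the OCCI standard.

# Hypothetical sketch of why non-interoperable cloud APIs push users toward
# adapter layers: each provider names the same action differently, so
# portable code targets a neutral interface instead.
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Neutral interface the application codes against."""
    @abstractmethod
    def launch_server(self, image: str, size: str) -> str: ...

class ProviderA(CloudProvider):
    def launch_server(self, image: str, size: str) -> str:
        # Imagine provider A's SDK call here, e.g. create_instance(...)
        return f"provider-a-instance({image},{size})"

class ProviderB(CloudProvider):
    def launch_server(self, image: str, size: str) -> str:
        # Provider B would name the same operation differently, e.g. deploy_vm(...)
        return f"provider-b-vm({image},{size})"

def deploy(provider: CloudProvider) -> str:
    # Application logic stays identical whichever provider is plugged in.
    return provider.launch_server(image="ubuntu", size="small")

print(deploy(ProviderA()))
print(deploy(ProviderB()))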

 
Cloud computing security
The relative security of cloud computing services is a contentious issue that may be delaying adoption. Issues barring the adoption of cloud computing are due in large part to the private and public sectors' unease surrounding the external management of security-based services. By its very nature, cloud computing, whether private or public, promotes external management of the services provided. This gives cloud computing service providers a strong incentive to prioritize building and maintaining robust management of secure services.

Organizations have been formed to develop standards and best practices for cloud computing services. One such organization, the Cloud Security Alliance, is a non-profit formed to promote the use of best practices for providing security assurance within cloud computing.

Availability and performance
In addition to concerns about security, businesses are also worried about acceptable levels of availability and performance of applications hosted in the cloud.

There are also concerns about a cloud provider shutting down for financial or legal reasons, which has happened in a number of cases.
 
Sustainability and siting
Although cloud computing is often assumed to be a form of "green computing", there is as yet no published study to substantiate this assumption. Siting the servers affects the environmental impact of cloud computing. In areas where the climate favors natural cooling and renewable electricity is readily available, the environmental effects will be more moderate. Thus countries with favorable conditions, such as Finland, Sweden and Switzerland, are trying to attract cloud computing data centers.

SmartBay, a marine research infrastructure of sensors and computational technology, is being developed using cloud computing, an emerging approach to shared infrastructure in which large pools of systems are linked together to provide IT services.
 
Research
A number of universities, vendors and government organizations are investing in research around the topic of cloud computing. Academic institutions include University of Melbourne (Australia), Georgia Tech, Yale, Wayne State, Virginia Tech, University of Wisconsin–Madison, Carnegie Mellon, MIT, Indiana University, University of Massachusetts, University of Maryland, IIT Bombay, North Carolina State University, Purdue University, University of California, University of Washington, University of Virginia, University of Utah, University of Minnesota, among others.

Joint government, academic and vendor collaborative research projects include the IBM/Google Academic Cloud Computing Initiative (ACCI). In October 2007, IBM and Google announced the multi-university project, designed to enhance students' technical knowledge to address the challenges of cloud computing.[95] In April 2009, the National Science Foundation joined the ACCI and awarded grants to 14 academic institutions.

In July 2008, HP, Intel Corporation and Yahoo! announced the creation of a global, multi-data center, open source test bed, called Open Cirrus, designed to encourage research into all aspects of cloud computing, service and data center management. Open Cirrus partners include the NSF, the University of Illinois (UIUC), Karlsruhe Institute of Technology, the Infocomm Development Authority (IDA) of Singapore, the Electronics and Telecommunications Research Institute (ETRI) in Korea, the Malaysian Institute for Microelectronic Systems (MIMOS), and the Institute for System Programming at the Russian Academy of Sciences (ISPRAS). In September 2010, more researchers joined the HP/Intel/Yahoo Open Cirrus project for cloud computing research. The new researchers are the China Mobile Research Institute (CMRI), Spain's Supercomputing Center of Galicia (CESGA by its Spanish acronym), Georgia Tech's Center for Experimental Research in Computer Systems (CERCS) and China Telecom.

In July 2010, HP Labs India announced a new cloud-based technology designed to simplify taking content and making it mobile-enabled, even from low-end devices. Called SiteonMobile, the new technology is designed for emerging markets where people are more likely to access the internet via mobile phones than computers. In November 2010, HP formally opened its Government Cloud Theatre, located at the HP Labs site in Bristol, England. The demonstration facility highlights high-security, highly flexible cloud computing based on intellectual property developed at HP Labs. The aim of the facility is to lessen fears about the security of the cloud. HP Labs Bristol is HP’s second-largest central research location and is currently responsible for researching cloud computing and security.

The IEEE Technical Committee on Services Computing in the IEEE Computer Society sponsors the IEEE International Conference on Cloud Computing (CLOUD). CLOUD 2010 was held on July 5–10, 2010 in Miami, Florida.

On March 23, 2011, Google, Microsoft, HP, Yahoo, Verizon, Deutsche Telekom and 17 other companies formed a nonprofit organization called Open Networking Foundation, focused on providing support for a new cloud initiative called Software-Defined Networking. The initiative is meant to speed innovation through simple software changes in telecommunications networks, wireless networks, data centers and other networking areas.
 
Criticism of the term
Some have come to criticize the term as being too unspecific or even misleading. Oracle Corporation CEO Larry Ellison asserts that cloud computing is "everything that we already do", claiming that the company could simply "change the wording on some of our ads" to deploy cloud-based services. Forrester Research VP Frank Gillett questions the very nature of and motivation behind the push for cloud computing, describing what he calls "cloud washing": companies simply relabeling their products as "cloud computing", resulting in mere marketing innovation instead of "real" innovation. GNU's Richard Stallman insists that the industry will only use the model to deliver services at ever-increasing rates over proprietary systems, otherwise likening it to a "marketing hype campaign".

Saturday, May 9, 2009

Virtual private server

Virtual private server
A virtual private server (VPS) is a marketing term used by Internet hosting services to refer to a virtual machine for use exclusively by an individual customer of the service. The term is used to emphasize that the virtual machine, although running in software on the same physical computer as other customers' virtual machines, is functionally equivalent to a separate physical computer, is dedicated to the individual customer's needs, has the privacy of a separate physical computer, and can be configured to run as a server computer (i.e. to run server software). The term Virtual Dedicated Server (VDS) is used less often for the same concept.

Each virtual server can run its own full-fledged operating system and can be independently rebooted.

The practice of partitioning a single server so that it appears as multiple servers has long been common practice on mainframe computers and mid-range computers such as the IBM AS/400. It has become more prevalent with the development of virtualization software and technologies for microcomputers.


Overview
The physical server typically runs a hypervisor, which is tasked with creating, destroying, and managing the resources of "guest" operating systems, or virtual machines. These guest operating systems are allocated a share of the physical server's resources, typically in such a way that the guest is not aware of any physical resources except those allocated to it by the hypervisor.
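
As a rough illustration, assuming a local QEMU/KVM host reachable at qemu:///system, the libvirt Python bindings can list each guest and the memory and virtual CPUs the hypervisor has allocated to it; other hypervisors expose similar information through their own tools.

# Sketch: asking a hypervisor which guests it manages and what resources
# each has been allocated, via the libvirt Python bindings.
# Assumes a local QEMU/KVM host reachable at qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(f"{dom.name()}: {vcpus} vCPU(s), "
          f"{mem_kib // 1024} MiB of {max_mem_kib // 1024} MiB allocated, "
          f"{'running' if dom.isActive() else 'stopped'}")
conn.close()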

The guest system may be fully virtualized, paravirtualized, or a hybrid of the two.

In a fully virtualized environment, the guest is presented with an emulated or virtualized set of hardware and is unaware that this hardware is not strictly physical. The hypervisor in this case must translate, map, and convert requests from the guest system into the appropriate resource requests on the host, resulting in significant overhead. Almost all operating systems can be virtualized using this method, as it requires no modification of the guest operating system; however, most hypervisors that perform full virtualization require a CPU that supports hardware virtualization.
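
On a Linux host, that hardware support can be checked directly: the sketch below (Linux-specific, and only a convenience check) looks for the Intel VT-x (vmx) or AMD-V (svm) flags in /proc/cpuinfo.

# Linux-only sketch: detect whether the host CPU advertises hardware
# virtualization extensions, which most full-virtualization hypervisors need.
def cpu_supports_virtualization(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags  # Intel VT-x / AMD-V
    return False

if __name__ == "__main__":
    print("Hardware virtualization available:", cpu_supports_virtualization())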

In a paravirtualized environment, the guest is aware of the hypervisor and interfaces directly with the host system's resources, with the hypervisor implementing real-time access control and resource allocation. This results in near-native performance, since the guest sees the same hardware as the host and can communicate with it natively. UNIX-like systems such as Linux, some variants of BSD, Plan 9, and OpenSolaris are currently known to support this method of virtualization. However, installing operating systems as paravirtualized guests tends to require more knowledge of the operating system, so that it can use special hypervisor-aware kernels and devices.

Some examples of paravirtualization-capable hypervisors are Xen, Virtuozzo, Vserver, and OpenVZ (which is the open source and development version of Parallels Virtuozzo Containers).

Hybrid or partial paravirtualization is full virtualization in which the guest uses paravirtualized drivers for key components such as networking and disk I/O, resulting in greatly increased I/O performance. As such, it is a common solution for operating systems which cannot be modified (for various reasons) to support paravirtualization.

One example of a hybrid hypervisor is Kernel-based Virtual Machine.

Uses
Virtual private servers bridge the gap between shared web hosting services and dedicated hosting services, giving independence from other customers of the VPS service in software terms but at less cost than a physical dedicated server. As a VPS runs its own copy of its operating system, customers have superuser-level access to that operating system instance and can install almost any software that runs on the OS. Certain software, such as virtualizers themselves, does not run well in a virtualized environment; some VPS providers place further restrictions, but these are generally lax compared to those in shared hosting environments. Due to the number of virtualization clients typically running on a single machine, a VPS generally has limited processor time, RAM, and disk space.

Virtual private server hosting
A growing number of companies offer virtual private server hosting, or virtual dedicated server hosting as an extension for web hosting services. There are several challenges to consider when licensing proprietary software in multi-tenant virtual environments.

Unmanaged Hosting
The customer is left to monitor and administer their own server.

This type of service is generally offered with no limit on the amount of data transferred on a fixed-bandwidth line. Usually, unmetered hosting is offered with a 10 Mbit/s, 100 Mbit/s or 1000 Mbit/s connection. This means that the customer is theoretically able to transfer roughly 3.33 TB per month on 10 Mbit/s, 33 TB on 100 Mbit/s and 333 TB on a 1000 Mbit/s line (although in practice the values will be significantly less).
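
Those figures follow from simple arithmetic: line rate in bits per second, times the seconds in a month, divided by eight to convert to bytes. The sketch below reproduces the calculation using a 31-day month and decimal terabytes; the exact totals shift slightly depending on how many days one assumes in a month.

# Sketch: theoretical maximum monthly transfer on an unmetered line.
# Uses a 31-day month and decimal (SI) terabytes.

SECONDS_PER_MONTH = 31 * 24 * 60 * 60

def max_monthly_transfer_tb(line_rate_mbit_s: float) -> float:
    bits = line_rate_mbit_s * 1_000_000 * SECONDS_PER_MONTH
    return bits / 8 / 1e12  # bits -> bytes -> terabytes

for rate in (10, 100, 1000):
    print(f"{rate:>4} Mbit/s -> ~{max_monthly_transfer_tb(rate):.2f} TB/month")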

Virtualization software
For some of the software packages commonly used to provide platform virtualization, see the comparison of platform virtual machines.

Cloud server
A VPS which is dynamic (that is, it can be changed at runtime) is often referred to as a cloud server. Key attributes for this are:
  • Additional hardware resources can be added at runtime (CPU, RAM), as sketched after this list
  • Server can be moved to other hardware while the server is running (automatically according to load in some cases)
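
As a hedged sketch of the first attribute, the example below uses the libvirt Python bindings to hot-add memory and virtual CPUs to a running guest. The connection URI and domain name are assumptions, and whether the operation succeeds depends on the hypervisor and on the maximums set when the guest was defined.

# Sketch: adding RAM and vCPUs to a running guest through libvirt.
# Assumes a QEMU/KVM host at qemu:///system and a guest named "customer-vps";
# success depends on the hypervisor and the guest's configured maximums.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("customer-vps")  # hypothetical domain name

# Grow memory to 4 GiB (libvirt expects KiB) without rebooting the guest.
dom.setMemoryFlags(4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# Bring the guest up to 4 virtual CPUs while it keeps running.
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()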

Reseller web hosting


Reseller hosting is a form of web hosting wherein the account owner has the ability to use his or her allotted hard drive space and bandwidth to host websites on behalf of third parties. The reseller purchases the host's services wholesale and then sells them to customers, possibly for a profit. A certain portion of hard drive space and bandwidth is allocated to the reseller account. The reseller may rent a dedicated server from a hosting company, or resell shared hosting services. In the latter case, the reseller is simply given permission to sell a certain amount of disk space and bandwidth to his own customers without renting a server from the web hosting company with which he signed up for a reseller account.

The typical web hosting reseller might be a web design firm, web developer or systems integrator who offers web hosting as an add-on service. Reseller hosting is also an inexpensive way for web hosting entrepreneurs to start a company. Most reseller hosting plans allow resellers to create their own service plans and choose their own pricing structure. In many cases, resellers are able to establish their own branding via customized control panels and name servers.

Reseller hosting does not require extensive knowledge of the technical aspects of web hosting. Usually, the data center operator is responsible for maintaining the network infrastructure and hardware, and the dedicated server owner configures, secures, and updates the server. A reseller is responsible for interfacing with his or her own customer base, but any hardware, software and connectivity problems are typically forwarded to the server provider from whom the reseller plan was purchased. Being a profitable reseller firm usually involves extensive advertising to attract customers. While fees with major hosts are only a few dollars a month, it is a low-margin business, and resellers must devote large advertising budgets to compete with established competitors. However, web hosting is one of the biggest online businesses, because every website needs hosting.

Resellers can set up and manage customer accounts via a web interface, usually a point-and-click control panel.

Well-known control panels include:
  • WHM/cPanel (Unix; Windows version coming soon)
  • Plesk (Windows/Unix)
  • DirectAdmin
  • Webmin (Unix)
  • H-Sphere
