Tuesday, October 8, 2013

Big Data Management

Big data management is the organization, administration and governance of large volumes of both structured and unstructured data.
The goal of big data management is to ensure a high level of data quality and accessibility for business purposes. By examining data from a variety of sources -- including call detail records, system logs and social media sites -- a company can gain insight into what business processes need improvement and how to gain a competitive advantage.
As part of the process, the company must decide what data must be kept for compliance reasons, what data can be disposed of and what data should be kept for in-memory analysis. The process requires careful data classification so that, ultimately, analysts can work with smaller, more manageable sets of data.
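As a rough sketch, such a classification pass might look like the Python snippet below; the retention rules and record fields are invented for illustration and do not come from any particular product.

    # Hypothetical example: tag each record with a retention class so that only
    # the relevant subset is kept hot for in-memory analysis.
    from datetime import datetime, timedelta

    def classify(record, now):
        """Return 'compliance', 'analysis' or 'dispose' for one record."""
        if record.get("regulated"):                        # e.g. call detail records
            return "compliance"
        if now - record["created"] < timedelta(days=90):   # keep recent data hot
            return "analysis"
        return "dispose"

    records = [
        {"id": 1, "created": datetime(2013, 9, 1), "regulated": False},
        {"id": 2, "created": datetime(2007, 1, 1), "regulated": True},
    ]
    print({r["id"]: classify(r, now=datetime(2013, 10, 8)) for r in records})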

Thursday, October 3, 2013

Application Delivery Controller

An application delivery controller (ADC) is a network device that manages client connections to complex Web and enterprise applications. In general, a controller is a hardware device or a software program that manages or directs the flow of data between two entities.
An ADC essentially functions as a load balancer, optimizing end-user performance, reliability, data center resource use and security for enterprise applications. Typically, ADCs are strategically placed to be a single point of control that can determine the security needs of an application and provide simplified authentication, authorization and accounting (AAA).
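At its simplest, the load-balancing role amounts to spreading incoming connections across a pool of back-end servers. The Python sketch below shows a round-robin version of that idea; the addresses are hypothetical.

    # Illustrative round-robin distribution of client requests across a pool of
    # back-end application servers (addresses are made up for the example).
    from itertools import cycle

    backends = cycle(["10.0.0.11", "10.0.0.12", "10.0.0.13"])

    def route(request):
        server = next(backends)       # pick the next server in rotation
        return "forwarding %s to %s" % (request, server)

    for req in ("GET /a", "GET /b", "GET /c", "GET /d"):
        print(route(req))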
An ADC can accelerate the performance of applications delivered over the wide area network (WAN) by implementing optimization techniques such as compression and reverse caching. With reverse caching, new user requests for static or dynamic Web objects can often be delivered from a cache in the ADC rather than having to be regenerated by the servers.
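A minimal sketch of the reverse-caching idea follows, with a stand-in fetch_from_origin() function in place of the real back-end servers; production ADCs do this in optimized hardware or software, not application code.

    # Repeat requests for the same object are answered from the ADC's cache
    # instead of being regenerated by the origin servers.
    cache = {}

    def fetch_from_origin(url):
        # Stand-in for a request to the back-end application servers.
        return "<html>content for %s</html>" % url

    def handle_request(url):
        if url not in cache:          # first request: ask the origin once
            cache[url] = fetch_from_origin(url)
        return cache[url]             # later requests: served from the cache

    handle_request("/index.html")     # cache miss, origin does the work
    handle_request("/index.html")     # cache hit, origin is not touched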

STONITH (Shoot The Other Node In The Head)

STONITH (Shoot The Other Node In The Head) is a Linux service for maintaining the integrity of nodes in a high-availability (HA) cluster.
STONITH automatically powers down a node that is not working correctly. An administrator might employ STONITH if one of the nodes in a cluster cannot be reached by the other node(s).
STONITH is traditionally implemented by hardware solutions that allow a cluster to talk to a physical server without involving the operating system (OS). Although hardware-based STONITH works well, this approach requires specific hardware to be installed in each server, which can make the nodes more expensive and result in hardware vendor lock-in. A disk-based solution, such as split brain detection (SBD), can be easier to implement because this approach requires no specific hardware. 
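The fencing decision itself is simple to outline. The sketch below is an illustration only, with a hypothetical power_off() standing in for whatever fencing device (out-of-band power control, SBD and so on) the cluster actually uses.

    # Simplified fencing logic: if no healthy peer can reach a node, power it
    # off before its resources are started elsewhere. power_off() is a
    # hypothetical stand-in for the real fencing mechanism.
    cluster = ["node1", "node2", "node3"]
    heartbeats = {
        ("node1", "node2"): True, ("node2", "node1"): True,
        # node3 answers nobody: it may be hung but could still hold shared storage
    }

    def reachable(src, dst):
        return heartbeats.get((src, dst), False)

    def power_off(node):
        print("FENCE: cutting power to %s" % node)

    for suspect in cluster:
        peers = [n for n in cluster if n != suspect]
        if not any(reachable(p, suspect) for p in peers):
            power_off(suspect)        # guarantee the node is really down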

Tuesday, October 1, 2013

Kyoto cooling

Kyoto cooling, also called the Kyoto wheel, is an energy-efficient free cooling method for data centers developed in the Netherlands.
Kyoto cooling uses outside air to remove the heat created by computing equipment instead of using mechanical refrigeration. Compared with the energy required by computer room air conditioners, computer room air handlers and other traditional cooling methods, Kyoto cooling uses between 75% and 92% less power. Kyoto cooling is named after the Kyoto Protocol, an international agreement aimed at reducing environmental impact.
The Kyoto cooling method uses a thermal wheel that contains a honeycomb lattice made out of heat-absorbent material. The wheel, which is half inside and half outside the building, removes heat from circulating air by picking up heat from the data center and then releasing it into the cooler outside air as the wheel rotates. The patented Kyoto method uses the energy transferred by the honeycomb system to run small fans that help pull air through each half of the system. It also takes advantage of the hot and cold aisle concept to completely isolate the flow of hot and cold air going to and from the wheel.
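To put the 75% to 92% figure in perspective, here is a back-of-the-envelope calculation; the 1 MW IT load and the conventional cooling draw below are assumptions made up for the example, not figures from the source.

    # Hypothetical comparison using the 75%-92% savings range quoted above.
    it_load_kw = 1000.0                         # assumed IT equipment load (1 MW)
    conventional_cooling_kw = 0.5 * it_load_kw  # assumed conventional cooling draw

    for savings in (0.75, 0.92):
        kyoto_kw = conventional_cooling_kw * (1 - savings)
        print("%d%% savings: cooling drops from %.0f kW to %.0f kW"
              % (savings * 100, conventional_cooling_kw, kyoto_kw))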

Monday, September 30, 2013

hardware emulation

Hardware emulation is the use of one hardware device to mimic the function of another hardware device.
A hardware emulator is designed to simulate the workings of an entirely different hardware platform than the one it runs on. Hardware emulation is generally used to debug and verify a system under design.
An administrator must use hardware emulation to run an unsupported operating system (OS) within a virtual machine (VM). In such a scenario, the virtual machine does not have direct access to the server hardware. Instead, an emulation layer directs traffic between physical and virtual hardware.
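As a toy illustration of the underlying idea, the snippet below interprets instructions for an invented three-instruction machine entirely in software; it does not correspond to any real architecture or hypervisor.

    # Toy emulator: a fetch-decode-execute loop for an invented machine,
    # interpreted entirely in software on the host.
    def emulate(program):
        acc = 0                       # accumulator of the emulated CPU
        pc = 0                        # emulated program counter
        while pc < len(program):
            op, arg = program[pc]
            if op == "LOAD":          # acc <- constant
                acc = arg
            elif op == "ADD":         # acc <- acc + constant
                acc += arg
            elif op == "PRINT":       # output the accumulator
                print(acc)
            pc += 1

    emulate([("LOAD", 5), ("ADD", 3), ("PRINT", None)])   # prints 8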

colocation (colo)

A colocation facility (colo) is a data center in which a business can rent space for servers and other computing hardware.
Typically, a colo provides the building, cooling, power, bandwidth and physical security while the customer provides servers, storage and networking equipment. Space in the facility is often leased by the rack, cabinet, cage or room.
There are several reasons a business might choose a colo over building its own data center, but one of the main drivers is the capital expenditure (CAPEX) associated with building, maintaining and updating a large computing facility. Many colos have extended their offerings to include managed services that support their customers' business initiatives.
In the past, colos were often used by private enterprises for disaster recovery and redundancy. Today, colos are especially popular with cloud service providers.