Tuesday, October 1, 2013
Kyoto cooling
Kyoto cooling, also called the Kyoto wheel, is an energy-efficient free cooling method for data centers developed in the Netherlands. Kyoto cooling uses outside air to remove the heat created by computing equipment instead of using mechanical refrigeration. Compared to the energy required by traditional computer room air conditioners, computer room air handlers and other traditional cooling methods, Kyoto cooling uses between 75% and 92% less power. Kyoto cooling is named after the Kyoto Protocol, an international agreement on reducing greenhouse gas emissions.
The Kyoto cooling method uses a thermal wheel that contains a honeycomb lattice made of heat-absorbent material. The wheel, which sits half inside and half outside the building, removes heat from circulating air by picking up heat from the data center and releasing it into the cooler outside air as the wheel rotates. The patented Kyoto method uses the energy transferred by the honeycomb system to run small fans that help pull air through each half of the system. It also takes advantage of the hot aisle/cold aisle concept to completely isolate the flow of hot and cold air going to and from the wheel.
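The sensible heat a rotary thermal wheel moves can be estimated with a standard heat-exchanger effectiveness formula. The sketch below is an illustrative model only, not the patented Kyoto design; the 85% effectiveness figure and the example temperatures and airflow are assumptions for demonstration.

```python
def wheel_cooling_power(airflow_kg_s, t_inside_c, t_outside_c, effectiveness=0.85):
    """Estimate the sensible heat (kW) a thermal wheel transfers from warm
    data-center return air to cooler outside air. Illustrative model only.
    """
    CP_AIR = 1.005  # specific heat of air, kJ/(kg*K)
    if t_outside_c >= t_inside_c:
        return 0.0  # free cooling only helps while outside air is cooler
    return effectiveness * airflow_kg_s * CP_AIR * (t_inside_c - t_outside_c)

# Example: 20 kg/s of 35 C return air against 15 C outside air
print(round(wheel_cooling_power(20, 35, 15), 1))  # 341.7 (kW)
```

The guard clause reflects why this is a "free cooling" method: when outside air is warmer than the return air, the wheel cannot reject heat and another cooling source is needed.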
Monday, September 30, 2013
hardware emulation
A hardware emulator is designed to simulate the workings of an entirely different hardware platform than the one it runs on. Hardware emulation is generally used to debug and verify a system under design. An administrator must use hardware emulation to run an unsupported operating system (OS) within a virtual machine (VM). In such a scenario, the virtual machine does not have direct access to server hardware; instead, an emulation layer directs traffic between physical and virtual hardware.
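At its core, an emulator interprets each guest instruction in host software rather than letting it run on the physical CPU. This minimal fetch-decode-execute loop for an invented toy instruction set (not any real ISA) sketches the idea:

```python
def run(program):
    """Interpret a toy guest instruction set entirely in software: the host
    decodes and executes each guest instruction, which is the essence of
    hardware emulation. Instructions: ("LOAD", reg, value),
    ("ADD", dst, src), ("HALT",).
    """
    regs = {"r0": 0, "r1": 0}  # emulated guest registers
    pc = 0                     # emulated guest program counter
    while True:
        op = program[pc]       # fetch
        if op[0] == "LOAD":    # decode + execute in host code
            regs[op[1]] = op[2]
        elif op[0] == "ADD":
            regs[op[1]] += regs[op[2]]
        elif op[0] == "HALT":
            return regs
        pc += 1

print(run([("LOAD", "r0", 2), ("LOAD", "r1", 40), ("ADD", "r0", "r1"), ("HALT",)]))
# {'r0': 42, 'r1': 40}
```

Because every guest operation passes through host code, emulation is slower than running natively, but it allows the guest platform to differ completely from the host hardware.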
Wednesday, June 12, 2013
colocation (colo)
A colocation (colo) is a data center facility in which a business can rent space for servers and other computing hardware. Typically, a colo provides the building, cooling, power, bandwidth and physical security, while the customer provides servers, storage and networking equipment. Space in the facility is often leased by the rack, cabinet, cage or room.
There are several reasons a business might choose a colo over building its own data center, but one of the main drivers is the capital expenditures (CAPEX) associated with building, maintaining and updating a large computing facility. Many colos have extended their offerings to include managed services that support their customers' business initiatives. In the past, colos were often used by private enterprises for disaster recovery and redundancy. Today, colos are especially popular with cloud service providers.
Thursday, March 28, 2013
Stress testing
Stress testing is the process of determining the ability of a computer, network, program or device to maintain a certain level of effectiveness under unfavorable conditions. The process can involve quantitative tests done in a lab, such as measuring the frequency of errors or system crashes. The term also refers to qualitative evaluation of factors such as availability or resistance to denial-of-service (DoS) attacks. Stress testing is often done in conjunction with the more general process of performance testing.
When conducting a stress test, an adverse environment is deliberately created and maintained. Actions involved may include:
- Running several resource-intensive applications in a single computer at the same time
- Attempting to hack into a computer and use it as a zombie to spread spam
- Flooding a server with useless e-mail messages
- Making numerous, concurrent attempts to access a single Web site
- Attempting to infect a system with viruses, Trojans, spyware or other malware
The adverse condition is progressively and methodically worsened, until the performance level falls below a certain minimum or the system fails altogether. In order to obtain the most meaningful results, individual stressors are varied one by one, leaving the others constant. This makes it possible to pinpoint specific weaknesses and vulnerabilities. For example, a computer may have adequate memory but inadequate security. Such a system, while able to run numerous applications simultaneously without trouble, may crash easily when attacked by a hacker intent on shutting it down.
Stress testing can be time-consuming and tedious. Nevertheless, some test personnel enjoy watching a system break down under increasingly intense attacks or stress factors. Stress testing can provide a means to measure graceful degradation, the ability of a system to maintain limited functionality even when a large part of it has been compromised.
Once the testing process has caused a failure, the final component of stress testing is determining how well or how fast a system can recover after an adverse event.
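The "vary one stressor while holding the others constant" process above can be sketched as a simple ramp loop. The probe function, threshold and levels here are hypothetical placeholders; in practice the probe would drive real load against the system under test.

```python
def ramp_stressor(apply_load, error_rate_threshold=0.05, max_level=100):
    """Progressively worsen a single stressor, holding everything else
    constant, until the measured error rate exceeds the acceptable threshold.

    apply_load(level) is a caller-supplied probe that applies the stressor at
    the given intensity and returns the observed error rate (0.0 to 1.0).
    Returns the highest level the system handled acceptably.
    """
    last_ok = 0
    for level in range(1, max_level + 1):
        if apply_load(level) > error_rate_threshold:
            break  # performance fell below the minimum: record the knee point
        last_ok = level
    return last_ok

# Toy probe: error rate grows with load and crosses 5% above level 30
print(ramp_stressor(lambda level: level / 600))  # 30
```

Running the ramp once per stressor, rather than varying several at once, is what makes it possible to attribute a failure to a specific weakness.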
Monday, March 25, 2013
FlowVisor
FlowVisor is an experimental software-defined networking (SDN) controller that enables network virtualization by dividing a physical network into multiple logical networks. FlowVisor ensures that each controller touches only the switches and resources assigned to it. It also partitions bandwidth and flow table resources on each switch and assigns those partitions to individual controllers.
FlowVisor slices a physical network into abstracted units of bandwidth, topology, traffic and network device central processing units (CPUs). It operates as a transparent proxy controller between the physical switches of an OpenFlow network and other OpenFlow controllers and enables multiple controllers to operate the same physical infrastructure, much like a server hypervisor allows multiple operating systems to use the same x86-based hardware. Other standard OpenFlow controllers then operate their own individual network slices through the FlowVisor proxy. This arrangement allows multiple OpenFlow controllers to run virtual networks on the same physical infrastructure.
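The proxy arrangement can be pictured as a dispatch table from flowspace to controller. The sketch below is a heavily simplified toy model (keyed on VLAN id, with invented names), not FlowVisor's actual implementation, which also rewrites and polices the OpenFlow control messages it forwards.

```python
class SliceProxy:
    """Toy model of FlowVisor-style slicing: each switch event is delivered
    only to the controller that owns the matching flowspace (here, a VLAN id).
    """
    def __init__(self):
        self.slices = {}  # VLAN id -> controller name

    def assign(self, vlan, controller):
        """Carve out a slice: give one controller ownership of a VLAN."""
        self.slices[vlan] = controller

    def dispatch(self, packet_in):
        """Route a switch event to the owning controller; unowned traffic
        is dropped so no controller sees resources outside its slice."""
        return self.slices.get(packet_in["vlan"], "drop")

proxy = SliceProxy()
proxy.assign(10, "research-controller")
proxy.assign(20, "production-controller")
print(proxy.dispatch({"vlan": 10, "port": 3}))  # research-controller
```

The isolation property is the key point: the research controller never sees VLAN 20 traffic, just as a guest OS under a hypervisor never sees another guest's memory.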
The SDN research community considers FlowVisor an experimental technology, although Stanford University, a leading SDN research institution, has run FlowVisor in its production network since 2009. FlowVisor lacks some of the basic network management interfaces that would make it enterprise-grade. It currently has no command line interface or Web-based administration console. Instead, users make changes to the technology with configuration file updates.
Friday, March 22, 2013
Application Security
Application security is the use of software, hardware, and procedural methods to protect applications from external threats.
Once an afterthought in software design, security is becoming an increasingly important concern during development as applications become more frequently accessible over networks and are, as a result, vulnerable to a wide variety of threats. Security measures built into applications and a sound application security routine minimize the likelihood that unauthorized code will be able to manipulate applications to access, steal, modify, or delete sensitive data.
Actions taken to ensure application security are sometimes called countermeasures. The most basic software countermeasure is an application firewall that limits the execution of files or the handling of data by specific installed programs. The most common hardware countermeasure is a router that can prevent the IP address of an individual computer from being directly visible on the Internet. Other countermeasures include conventional firewalls, encryption/decryption programs, anti-virus programs, spyware detection/removal programs and biometric authentication systems.
Application security can be enhanced by rigorously defining enterprise assets, identifying what each application does (or will do) with respect to these assets, creating a security profile for each application, identifying and prioritizing potential threats and documenting adverse events and the actions taken in each case. This process is known as threat modeling. In this context, a threat is any potential or actual adverse event that can compromise the assets of an enterprise, including both malicious events, such as a denial-of-service (DoS) attack, and unplanned events, such as the failure of a storage device.
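The threat-modeling steps above can be made concrete with a small data structure that records threats against assets and prioritizes them. The likelihood-times-impact scoring and the sample entries below are illustrative assumptions, not a formal methodology.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One entry in a simple threat model (illustrative structure)."""
    asset: str
    description: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    impact: int      # 1 (minor) .. 5 (severe)

    def risk(self):
        # A common coarse prioritization: likelihood x impact
        return self.likelihood * self.impact

threats = [
    Threat("customer database", "SQL injection via web form", 4, 5),
    Threat("web frontend", "denial-of-service (DoS) attack", 3, 3),
    Threat("archive storage", "storage device failure (unplanned event)", 2, 4),
]

# Prioritize potential threats, highest risk first
for t in sorted(threats, key=Threat.risk, reverse=True):
    print(t.risk(), t.asset)
```

Note that the model covers both malicious events (SQL injection, DoS) and unplanned ones (device failure), matching the definition of a threat given above.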