Monday, September 30, 2013

Hardware Emulation
Hardware emulation is the use of one hardware device to mimic the function of another hardware device.
A hardware emulator is designed to simulate the workings of an entirely different hardware platform than the one it runs on. Hardware emulation is generally used to debug and verify a system under design.
An administrator must use hardware emulation to run an unsupported operating system (OS) within a virtual machine (VM). In such a scenario, the virtual machine does not have direct access to the server hardware. Instead, an emulation layer directs traffic between the physical and virtual hardware.
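To make the idea concrete, here is a minimal Python sketch of the fetch-decode-execute loop at the heart of any emulator. The machine it mimics is invented for the example; a real emulator would model the registers, memory and instruction set of an actual device.

# Minimal sketch of hardware emulation: a software fetch-decode-execute
# loop that mimics a tiny, hypothetical accumulator machine. The opcodes
# are invented for illustration; a real emulator models a real device.

def run(program):
    acc = 0          # emulated accumulator register
    pc = 0           # emulated program counter
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":     # load an immediate value into the accumulator
            acc = arg
        elif op == "ADD":    # add an immediate value to the accumulator
            acc += arg
        elif op == "JNZ":    # jump to an address if the accumulator is nonzero
            if acc != 0:
                pc = arg
        elif op == "HALT":
            break
    return acc

# Count down from 3 to 0 on the emulated machine.
print(run([("LOAD", 3), ("ADD", -1), ("JNZ", 1), ("HALT", None)]))  # -> 0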

colocation (colo)
A colocation facility (colo) is a data center in which a business can rent space for servers and other computing hardware.
Typically, a colo provides the building, cooling, power, bandwidth and physical security while the customer provides servers, storage and networking equipment. Space in the facility is often leased by the rack, cabinet, cage or room.
There are several reasons a business might choose a colo over building its own data center, but one of the main drivers is the capital expenditures (CAPEX) associated with building, maintaining and updating a large computing facility. Many colos have extended their offerings to include managed services that support their customers' business initiatives.
In the past, colos were often used by private enterprises for disaster recovery and redundancy. Today, colos are especially popular with cloud service providers.

Thursday, March 28, 2013


Stress testing

Stress testing is the process of determining the ability of a computer, network, program or device to maintain a certain level of effectiveness under unfavorable conditions. The process can involve quantitative tests done in a lab, such as measuring the frequency of errors or system crashes. The term also refers to qualitative evaluation of factors such as availability or resistance to denial-of-service (DoS) attacks. Stress testing is often done in conjunction with the more general process of performance testing.
When conducting a stress test, an adverse environment is deliberately created and maintained. Actions involved may include:
  • Running several resource-intensive applications on a single computer at the same time
  • Attempting to hack into a computer and use it as a zombie to spread spam
  • Flooding a server with useless e-mail messages
  • Making numerous, concurrent attempts to access a single Web site
  • Attempting to infect a system with viruses, Trojans, spyware or other malware
The adverse condition is progressively and methodically worsened, until the performance level falls below a certain minimum or the system fails altogether. In order to obtain the most meaningful results, individual stressors are varied one by one, leaving the others constant. This makes it possible to pinpoint specific weaknesses and vulnerabilities. For example, a computer may have adequate memory but inadequate security. Such a system, while able to run numerous applications simultaneously without trouble, may crash easily when attacked by a hacker intent on shutting it down.
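The ramp-up loop itself can be quite simple. The Python sketch below increases one stressor (the number of concurrent requests) step by step while holding everything else constant, and stops at the breaking point. The target URL and the 5% error threshold are placeholders for the example.

# Sketch of a progressive stress test: worsen one stressor (concurrent
# requests) stepwise until the error rate crosses a threshold.
import concurrent.futures
import urllib.request

TARGET = "http://localhost:8080/"   # hypothetical system under test

def hit(url):
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

for workers in range(10, 210, 10):          # progressively worsen the condition
    with concurrent.futures.ThreadPoolExecutor(workers) as pool:
        results = list(pool.map(hit, [TARGET] * (workers * 5)))
    error_rate = 1 - sum(results) / len(results)
    print(f"{workers} workers: {error_rate:.1%} errors")
    if error_rate > 0.05:                   # assumed minimum service level
        print(f"Breaking point reached at {workers} concurrent workers")
        break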
Stress testing can be time-consuming and tedious. Nevertheless, some test personnel enjoy watching a system break down under increasingly intense attacks or stress factors. Stress testing can provide a means to measure graceful degradation, the ability of a system to maintain limited functionality even when a large part of it has been compromised.
Once the testing process has caused a failure, the final component of stress testing is determining how well or how fast a system can recover after an adverse event.

Monday, March 25, 2013


FlowVisor
FlowVisor is an experimental software-defined networking (SDN) controller that enables network virtualization by dividing a physical network into multiple logical networks. FlowVisor ensures that each controller touches only the switches and resources assigned to it. It also partitions bandwidth and flow table resources on each switch and assigns those partitions to individual controllers.
FlowVisor slices a physical network into abstracted units of bandwidth, topology, traffic and network device central processing units (CPUs). It operates as a transparent proxy between the physical switches of an OpenFlow network and other OpenFlow controllers, enabling multiple controllers to operate the same physical infrastructure much as a server hypervisor allows multiple operating systems to share the same x86-based hardware. Each standard OpenFlow controller then operates its own network slice through the FlowVisor proxy.
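The slicing policy can be pictured as a simple admission check. The Python sketch below is only an illustration of the concept, with invented slice definitions and function names; the real FlowVisor operates on actual OpenFlow protocol messages rather than on data structures like these.

# Simplified illustration of network slicing: a proxy checks each
# controller's flow rule against its slice policy before passing it
# to the physical switch. All names here are invented for the sketch.

SLICES = {
    "research": {"switches": {"s1", "s2"}, "vlans": {10}},
    "production": {"switches": {"s1", "s3"}, "vlans": {20}},
}

def allowed(slice_name, switch, vlan):
    """Return True if the slice's policy permits touching this resource."""
    policy = SLICES[slice_name]
    return switch in policy["switches"] and vlan in policy["vlans"]

def forward_flow_mod(slice_name, switch, vlan, rule):
    if allowed(slice_name, switch, vlan):
        print(f"[{slice_name}] install on {switch}: {rule}")
    else:
        print(f"[{slice_name}] rejected: {switch}/vlan {vlan} is outside the slice")

forward_flow_mod("research", "s1", 10, "match vlan 10 -> port 2")   # allowed
forward_flow_mod("research", "s3", 10, "match vlan 10 -> port 2")   # rejected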
The SDN research community considers FlowVisor an experimental technology, although Stanford University, a leading SDN research institution, has run FlowVisor in its production network since 2009. FlowVisor lacks some of the basic network management interfaces that would make it enterprise-grade. It currently has no command line interface or Web-based administration console. Instead, users make changes to the technology with configuration file updates.

Friday, March 22, 2013


Application Security


Application security is the use of software, hardware, and procedural methods to protect applications from external threats.
Once an afterthought in software design, security is becoming an increasingly important concern during development as applications become more frequently accessible over networks and are, as a result, vulnerable to a wide variety of threats. Security measures built into applications and a sound application security routine minimize the likelihood that unauthorized code will be able to manipulate applications to access, steal, modify, or delete sensitive data.
Actions taken to ensure application security are sometimes called countermeasures. The most basic software countermeasure is an application firewall that limits the execution of files or the handling of data by specific installed programs. The most common hardware countermeasure is a router that can prevent the IP address of an individual computer from being directly visible on the Internet. Other countermeasures include conventional firewalls, encryption/decryption programs, anti-virus programs, spyware detection/removal programs and biometric authentication systems.
Application security can be enhanced by rigorously defining enterprise assets, identifying what each application does (or will do) with respect to these assets, creating a security profile for each application, identifying and prioritizing potential threats and documenting adverse events and the actions taken in each case. This process is known as threat modeling. In this context, a threat is any potential or actual adverse event that can compromise the assets of an enterprise, including both malicious events, such as a denial-of-service (DoS) attack, and unplanned events, such as the failure of a storage device.
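The output of threat modeling is often just structured data. As a hypothetical illustration, the Python sketch below records assets, applications and adverse events with rough likelihood and impact scores, then sorts them so review can start with the highest-risk items; the scoring scheme and the entries are invented for the example.

# Hypothetical sketch of a threat-model record: enumerate assets, the
# threats against them, and a rough priority. Scores are illustrative.
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str        # enterprise asset the threat targets
    application: str  # application that touches the asset
    event: str        # adverse event, malicious or unplanned
    likelihood: int   # 1 (rare) .. 5 (expected)
    impact: int       # 1 (minor) .. 5 (severe)

    @property
    def priority(self):
        return self.likelihood * self.impact

threats = [
    Threat("customer DB", "web storefront", "SQL injection", 3, 5),
    Threat("customer DB", "backup service", "storage device failure", 2, 4),
    Threat("public site", "web storefront", "DoS attack", 4, 3),
]

for t in sorted(threats, key=lambda t: t.priority, reverse=True):
    print(f"priority {t.priority:2}: {t.event} via {t.application}")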

Wednesday, March 20, 2013


Thunderbolt 

Thunderbolt (code-named "Light Peak") is a high-speed, bidirectional input/output (I/O) technology that can transfer data of all types on a single cable at speeds of up to 10 Gbps (billions of bits per second). A single cable up to three meters (10 feet) long can support seven devices simultaneously in a daisy chain.

According to Intel, a Thunderbolt connection can transfer 1 TB (terabyte) of data in less than five minutes and a typical high-definition (HD) video file in less than 30 seconds. The high speed and low latency make Thunderbolt ideal for backup, restore, and archiving operations. Of the seven devices (maximum) that a Thunderbolt connection can support at one time, two of them can be displays. Because of the exceptional transfer rate that Thunderbolt offers, the technology is ideal for gamers and video professionals.
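As a back-of-the-envelope check on those figures, the Python sketch below computes ideal transfer times at a given line rate. These are raw-arithmetic numbers only; effective throughput depends on channel aggregation and protocol overhead (first-generation Thunderbolt carries two bidirectional 10 Gbps channels per port).

# Back-of-the-envelope transfer-time arithmetic for a given line rate.
# Ideal numbers only: real throughput depends on channel aggregation,
# protocol overhead, and the devices on each end of the cable.

def transfer_seconds(size_bytes, rate_bits_per_sec):
    return size_bytes * 8 / rate_bits_per_sec

TB = 10**12                      # 1 terabyte in bytes
for rate_gbps in (10, 20):       # one channel vs. two aggregated channels
    secs = transfer_seconds(TB, rate_gbps * 10**9)
    print(f"1 TB at {rate_gbps} Gbps: {secs / 60:.1f} minutes (ideal)")
# -> 1 TB at 10 Gbps: 13.3 minutes (ideal)
# -> 1 TB at 20 Gbps: 6.7 minutes (ideal)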

The nickname "Light Peak" derives from Intel's original intent to use optical fiber cabling. However, engineers discovered that copper cables could provide up to 10 Gbps at a lower cost than optical fiber cables could do. In addition, Intel found that copper cabling could deliver up to 10 watts of power to attached devices at the requisite speeds.