Thursday, March 28, 2013


Stress testing

Stress testing is the process of determining the ability of a computer, network, program or device to maintain a certain level of effectiveness under unfavorable conditions. The process can involve quantitative tests done in a lab, such as measuring the frequency of errors or system crashes. The term also refers to qualitative evaluation of factors such as availability or resistance to denial-of-service (DoS) attacks. Stress testing is often done in conjunction with the more general process of performance testing.
When conducting a stress test, an adverse environment is deliberately created and maintained. Actions involved may include:
  • Running several resource-intensive applications in a single computer at the same time
  • Attempting to hack into a computer and use it as a zombie to spread spam
  • Flooding a server with useless e-mail messages
  • Making numerous, concurrent attempts to access a single Web site
  • Attempting to infect a system with viruses, Trojans, spyware or other malware
The adverse condition is progressively and methodically worsened, until the performance level falls below a certain minimum or the system fails altogether. In order to obtain the most meaningful results, individual stressors are varied one by one, leaving the others constant. This makes it possible to pinpoint specific weaknesses and vulnerabilities. For example, a computer may have adequate memory but inadequate security. Such a system, while able to run numerous applications simultaneously without trouble, may crash easily when attacked by a hacker intent on shutting it down.
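The sketch below illustrates this one-variable-at-a-time approach in Python. The target URL, step size and 5% error threshold are all hypothetical; it ramps a single stressor (request concurrency) until the measured error rate crosses the acceptable minimum:

    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET = "http://localhost:8080/"   # hypothetical system under test
    MAX_ERROR_RATE = 0.05               # minimum acceptable level: 5% errors

    def hit(url):
        """Return True if a single request succeeds, False otherwise."""
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except Exception:
            return False

    def error_rate(concurrency):
        """Fire `concurrency` simultaneous requests; report the error fraction."""
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            results = list(pool.map(hit, [TARGET] * concurrency))
        return results.count(False) / len(results)

    # Worsen one stressor (concurrency) progressively, holding all others
    # constant, until performance falls below the minimum.
    for concurrency in range(10, 1010, 10):
        rate = error_rate(concurrency)
        print(f"{concurrency} concurrent requests -> {rate:.1%} errors")
        if rate > MAX_ERROR_RATE:
            print(f"Failure point: {concurrency} concurrent requests")
            break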
Stress testing can be time-consuming and tedious. Nevertheless, some test personnel enjoy watching a system break down under increasingly intense attacks or stress factors. Stress testing can provide a means to measure graceful degradation, the ability of a system to maintain limited functionality even when a large part of it has been compromised.
Once the testing process has caused a failure, the final component of stress testing is determining how well or how fast a system can recover after an adverse event.
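A sketch of that final measurement, assuming a hypothetical health-check URL that starts answering again once the system is back up:

    import time
    import urllib.request

    HEALTH_URL = "http://localhost:8080/health"   # hypothetical endpoint

    def wait_for_recovery(poll_interval=1.0, give_up_after=600.0):
        """Poll until the system answers again; return recovery time in seconds."""
        start = time.monotonic()
        while time.monotonic() - start < give_up_after:
            try:
                with urllib.request.urlopen(HEALTH_URL, timeout=2):
                    return time.monotonic() - start
            except Exception:
                time.sleep(poll_interval)
        return None   # no recovery within the observation window

    elapsed = wait_for_recovery()
    if elapsed is not None:
        print(f"System recovered after {elapsed:.1f} s")
    else:
        print("System did not recover within the observation window")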

Monday, March 25, 2013


FlowVisor
FlowVisor is an experimental software-defined networking (SDN) controller that enables network virtualization by dividing a physical network into multiple logical networks. FlowVisor ensures that each controller touches only the switches and resources assigned to it. It also partitions bandwidth and flow table resources on each switch and assigns those partitions to individual controllers.
FlowVisor slices a physical network into abstracted units of bandwidth, topology, traffic and network device central processing units (CPUs). It operates as a transparent proxy between the physical switches of an OpenFlow network and other OpenFlow controllers, enabling multiple controllers to operate the same physical infrastructure, much as a server hypervisor allows multiple operating systems to use the same x86-based hardware. Other standard OpenFlow controllers then run their own individual network slices through the FlowVisor proxy.
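A toy Python sketch of the core idea (an illustration only, not FlowVisor's actual code or API): the proxy keeps a flowspace map and forwards each switch event only to the controller that owns the matching slice. The slice names, VLAN IDs and controller addresses are invented:

    FLOWSPACE = {
        # match field (here, a VLAN ID) -> owning slice's controller address
        100: "tcp:10.0.0.1:6633",   # hypothetical "research" slice
        200: "tcp:10.0.0.2:6633",   # hypothetical "production" slice
    }

    def route_packet_in(vlan_id, event):
        """Deliver a switch event only to the controller owning that slice."""
        controller = FLOWSPACE.get(vlan_id)
        if controller is None:
            return None   # traffic outside every slice is isolated, not leaked
        print(f"forwarding {event!r} to {controller}")
        return controller

    route_packet_in(100, "packet-in from switch dpid 00:00:00:00:00:00:00:01")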
The SDN research community considers FlowVisor an experimental technology, although Stanford University, a leading SDN research institution, has run FlowVisor in its production network since 2009. FlowVisor lacks some of the basic network management interfaces that would make it enterprise-grade. It currently has no command line interface or Web-based administration console. Instead, users make changes to the technology with configuration file updates.

Friday, March 22, 2013


Application Security


Application security is the use of software, hardware, and procedural methods to protect applications from external threats.
Once an afterthought in software design, security is becoming an increasingly important concern during development as applications become more frequently accessible over networks and are, as a result, vulnerable to a wide variety of threats. Security measures built into applications and a sound application security routine minimize the likelihood that unauthorized code will be able to manipulate applications to access, steal, modify, or delete sensitive data.
Actions taken to ensure application security are sometimes called countermeasures. The most basic software countermeasure is an application firewall that limits the execution of files or the handling of data by specific installed programs. The most common hardware countermeasure is a router that can prevent the IP address of an individual computer from being directly visible on the Internet. Other countermeasures include conventional firewalls, encryption/decryption programs, anti-virus programs, spyware detection/removal programs and biometric authentication systems.
Application security can be enhanced by:
  • rigorously defining enterprise assets
  • identifying what each application does (or will do) with respect to these assets
  • creating a security profile for each application
  • identifying and prioritizing potential threats
  • documenting adverse events and the actions taken in each case
This process is known as threat modeling. In this context, a threat is any potential or actual adverse event that can compromise the assets of an enterprise, including both malicious events, such as a denial-of-service (DoS) attack, and unplanned events, such as the failure of a storage device.
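A minimal sketch of what a threat-model record produced by these steps might look like, with hypothetical assets and 1-5 scores (real methodologies score risk in various ways):

    threats = [
        # (asset, threat, likelihood 1-5, impact 1-5)
        ("customer database", "SQL injection", 4, 5),
        ("public web server", "denial-of-service (DoS) attack", 3, 4),
        ("storage array", "storage device failure (unplanned event)", 2, 5),
    ]

    # Prioritize by a simple risk score: likelihood x impact.
    for asset, threat, likelihood, impact in sorted(
            threats, key=lambda t: t[2] * t[3], reverse=True):
        print(f"risk {likelihood * impact:>2}: {threat} against {asset}")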

Wednesday, March 20, 2013


Thunderbolt 

Thunderbolt (code-named "Light Peak") is a high-speed, bidirectional input/output (I/O) technology that can transfer data of all types on a single cable at speeds of up to 10 Gbps (billions of bits per second). A single cable up to three meters (10 feet) long can support seven devices simultaneously in a daisy chain.

According to Intel, a Thunderbolt connection can transfer 1 TB (terabyte) of data in less than five minutes and a typical high-definition (HD) video file in less than 30 seconds. The high speed and low latency make Thunderbolt ideal for backup, restore and archiving operations. Of the seven devices (maximum) that a Thunderbolt connection can support at one time, two can be displays. Because of its exceptional transfer rate, the technology is also well suited to gamers and video professionals.
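The underlying arithmetic is straightforward; the short sketch below converts a link rate to an ideal transfer time (the file sizes are assumptions, and real figures depend on protocol overhead and on how many of Thunderbolt's channels are used in parallel):

    def transfer_seconds(size_bytes, link_gbps):
        """Seconds to move size_bytes over an ideal link of link_gbps gigabits/s."""
        return (size_bytes * 8) / (link_gbps * 1e9)

    one_tb = 1e12      # 1 TB (decimal)
    hd_file = 10e9     # an assumed ~10 GB high-definition video file

    print(f"1 TB at 10 Gbps:       {transfer_seconds(one_tb, 10) / 60:.1f} minutes")
    print(f"10 GB file at 10 Gbps: {transfer_seconds(hd_file, 10):.0f} seconds")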

The nickname "Light Peak" derives from Intel's original intent to use optical fiber cabling. However, engineers discovered that copper cables could provide up to 10 Gbps at a lower cost than optical fiber cables could do. In addition, Intel found that copper cabling could deliver up to 10 watts of power to attached devices at the requisite speeds.

Monday, March 18, 2013


Freemium
Freemium is a business model in which the owner or service provider offers basic features to users at no cost and charges a premium for supplemental or advanced features. The term, which is a combination of the words "free" and "premium," was coined by Jarid Lukin of Alacra in 2006 after venture capitalist Fred Wilson came up with the idea.
The freemium model is popular with Web 2.0 companies and Web-based e-mail services. For an enterprise to implement a freemium service, the first step is to acquire a loyal customer base. Premium features or add-ons can then be promoted through online advertising, magazine advertising, referral networks, search engine marketing and word of mouth. Services that have successfully employed the freemium model include Ad-Aware, Flickr, NewsGator, Skype, Box.net and Webroot.
In an effective freemium service, customers find it easy to acquire the basic set of features. The premium features are typically promoted in an indirect way, avoiding "in-your-face" banners or pop-up ads. For example, an anti-spyware program can offer manual offline scanning and updates for free. If the user attempts to activate a specialized function such as continuous malware monitoring, a message appears explaining that it is a premium feature. If the user wants to obtain that feature, the purchasing or subscription process is simple and straightforward.
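The gating logic in that anti-spyware example can be expressed in a few lines; a toy sketch with invented feature names:

    FREE_FEATURES = {"manual_scan", "definition_updates"}
    PREMIUM_FEATURES = {"continuous_monitoring", "scheduled_scans"}

    def activate(feature, is_premium_user):
        """Run a feature if the user's tier allows it; otherwise prompt to upgrade."""
        if feature in FREE_FEATURES:
            return f"{feature} activated"
        if feature in PREMIUM_FEATURES:
            if is_premium_user:
                return f"{feature} activated"
            return f"{feature} is a premium feature -- subscribe to enable it"
        return f"unknown feature: {feature}"

    print(activate("manual_scan", is_premium_user=False))            # free tier works
    print(activate("continuous_monitoring", is_premium_user=False))  # upgrade prompt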

Wednesday, March 13, 2013


Microtargeting

Microtargeting (also called micro-targeting or micro-niche targeting) is a marketing strategy that uses consumer data, demographics and big data analytics to identify the interests of specific individuals or very small groups of like-minded individuals and influence their thoughts or actions. An important goal of a microtargeting initiative is to know the target audience so well that messages get delivered through the target's preferred communication channel.
In the 2012 United States presidential campaign, microtargeting techniques were used successfully to interact with and appeal to voters on an individualized basis. To achieve this type of personalization on such a massive scale, political campaign managers collected (and continually updated) detailed information about individual voters and used predictive analytics to model voter sentiment. Understanding the voting population on an individual level enabled campaign leaders to go beyond standard political party-oriented messages and communicate with voters about specific topics in order to influence each voter's decision.

Tuesday, March 12, 2013


3Vs (volume, variety and velocity)
3Vs (volume, variety and velocity) are three defining properties or dimensions of big data. Volume refers to the amount of data, variety refers to the number of types of data and velocity refers to the speed of data processing. According to the 3Vs model, the challenges of big data management result from the expansion of all three properties, rather than just the volume alone -- the sheer amount of data to be managed.
Gartner analyst Doug Laney introduced the 3Vs concept in a 2001 META Group (now Gartner) research publication, 3-D Data Management: Controlling Data Volume, Velocity and Variety. More recently, additional Vs have been proposed for addition to the model, including variability -- the increase in the range of values typical of a large data set -- and value, which addresses the need for valuation of enterprise data.
The infographic below (reproduced with permission from Diya Soubra's post, The 3Vs that define Big Data, on Data Science Central) illustrates the increasing expansion of the 3Vs.
[Infographic: The 3Vs of big data]

Monday, March 11, 2013


Spaghetti Diagram

A spaghetti diagram (sometimes called a physical process flow or a point-to-point workflow diagram) is a line-based representation of the continuous flow of some entity, such as a person, a product or a piece of information, as it goes through some process. The name comes from the resemblance of the final product to a bowl of cooked spaghetti.

Spaghetti diagrams are often used in agile project management. Unlike spaghetti code, which is a derogatory term for unstructured language coding, the term spaghetti diagram carries no negative connotation.

Friday, March 8, 2013


Shadow IT 

Shadow IT is hardware or software within an enterprise that is not supported by the organization's central IT department. Although the label itself is neutral, the term often carries a negative connotation because it implies that the IT department has not approved the technology or doesn't even know that employees are using it.

In the past, shadow IT was often the result of an impatient employee's desire for immediate access to hardware, software or a specific web service without going through the necessary steps to obtain the technology through corporate channels. With the consumerization of IT and cloud computing, the meaning has expanded to include personal technology that employees use at work (see BYOD policy) or niche technology that meets the unique needs of a particular business division and is supported by a third-party service provider or in-house group, instead of by corporate IT.

Shadow IT can introduce security risks when unsupported hardware and software are not subject to the same security measures that are applied to supported technologies. Furthermore, technologies that operate without the IT department's knowledge can negatively affect the user experience of other employees by impacting bandwidth and creating situations in which network or software application protocols conflict. Shadow IT can also become a compliance concern when, for example, end users use Dropbox or other free cloud storage services to store corporate data.

Feelings toward shadow IT are mixed; some IT administrators fear that if shadow IT is allowed, end users will create data silos and prevent information from flowing freely throughout the organization. Other administrators believe that in a fast-changing business world, the IT department must embrace shadow IT for the innovation it supplies and create policies for overseeing and monitoring its acceptable use.

Popular end user shadow technologies include smartphones, portable USB drives and tablets. Popular greynet applications include Gmail, instant messaging services and Skype.

Thursday, March 7, 2013


oVirt
oVirt is a project started by Red Hat Inc. to develop and promote an open source data center virtualization platform of the same name.
oVirt, which offers large-scale, centralized management for server and desktop virtualization, was designed as an open source alternative to VMware vCenter/vSphere. oVirt version 3.1 was released in August 2012 and features live snapshots, network adapter hot plugging and support for accessing externally hosted logical unit numbers (LUNs) from virtual machines (VMs).
oVirt is built upon Red Hat Enterprise Virtualization Manager (RHEV-M) code, the Kernel-based Virtual Machine (KVM) hypervisor, the oVirt node for running VMs and virtualization tools such as libvirt and v2v. It can use locally attached storage, Network File System (NFS), iSCSI or Fibre Channel interfaces to communicate with host servers.
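Since oVirt drives its hosts through libvirt, the libvirt Python bindings give a taste of the layer underneath; a minimal sketch (assuming the libvirt-python package and a local KVM hypervisor) that lists the VMs on a host:

    import libvirt

    # Read-only connection to the local KVM hypervisor of the kind oVirt manages
    conn = libvirt.openReadOnly("qemu:///system")
    try:
        for dom_id in conn.listDomainsID():        # currently running VMs
            print("running:", conn.lookupByID(dom_id).name())
        for name in conn.listDefinedDomains():     # defined but powered-off VMs
            print("defined:", name)
    finally:
        conn.close()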

Digital CRM

Digital customer relationship management (digital CRM) is the use of Internet communications channels and smart technologies to enhance customer relationship management (CRM) and customer experience management (CEM) initiatives. Digital CRM seeks to gather real-time data in order to provide an organization with a clear picture of each customer's habits and preferences and to facilitate automated messaging and personalization.

The term digital CRM is often associated with the Internet of Things, a scenario in which computer processors capable of sending and receiving data are embedded in everyday objects. In such a scenario, the customer may not be human -- the customer might be a fuel tank named #54356, capable of sending an automated message to the supplier and requesting a delivery.
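A toy sketch of that scenario, with an invented payload format: the "customer" is a sensor-equipped tank that emits an automated reorder message when its level drops below a threshold:

    import json

    def build_reorder_message(tank_id, level_pct, reorder_below=20):
        """Return a JSON reorder request if the tank is low, else None."""
        if level_pct >= reorder_below:
            return None
        return json.dumps({
            "customer": tank_id,
            "event": "reorder",
            "fuel_level_pct": level_pct,
        })

    print(build_reorder_message("#54356", level_pct=12))
    # -> {"customer": "#54356", "event": "reorder", "fuel_level_pct": 12}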