Tuesday, October 29, 2013
Human Capital Management (HCM)
Human capital management (HCM) is an approach to employee staffing that perceives people as assets (human capital) whose current value can be measured and whose future value can be enhanced through investment. The term can be controversial because the word "capital" has an impersonal connotation, implying that employees are simply an expensive operating cost to be minimized wherever possible. A responsible human capital management strategy, however, is built on the understanding that an organization's employees are its most valuable asset -- and that keeping records which allow managers to develop staff effectively and promote employee engagement helps the organization meet both its short- and long-term monetary goals.
Successful human capital management requires a great deal of documentation, and HCM software can streamline and automate many of the day-to-day record-keeping processes. When an organization evaluates an HCM system investment, it must weigh the benefits of a standalone HCM approach against those of an all-in-one enterprise resource planning (ERP) suite that includes HCM modules. In a large enterprise, having one integrated platform with a single database for everything can save on the cost of maintaining and upgrading individual software applications and application program interfaces (APIs). In a small or midsize company, however, it may be easier to simply enter the same data into multiple systems manually.
Tuesday, October 15, 2013
Cloud Backup
Cloud backup, also known as online backup, is a strategy for backing up data that involves sending a copy of the data over a proprietary or public network to an off-site server. The server is usually hosted by a third-party service provider, who charges the backup customer a fee based on capacity, bandwidth or number of users. Online backup systems are typically built around a client software application that runs on a schedule determined by the level of service the customer has purchased. If the customer has contracted for daily backups, for instance, then the application collects, compresses, encrypts and transfers data to the service provider's servers every 24 hours. To reduce the amount of bandwidth consumed and the time it takes to transfer files, the service provider might only provide incremental backups after the initial full backup.
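To make the incremental approach concrete, here is a minimal Python sketch of a backup client that hashes local files and transfers only those that have changed since the previous run. It is not tied to any particular provider: the upload_to_provider function and the directory path are hypothetical placeholders, and the encryption step is omitted for brevity.

```python
import gzip
import hashlib
import json
import os

MANIFEST = "backup_manifest.json"   # records file hashes from the previous run

def file_hash(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def upload_to_provider(path, data):
    """Hypothetical stand-in for the provider's transfer API."""
    print(f"uploading {path} ({len(data)} compressed bytes)")

def incremental_backup(root):
    # Load the manifest from the last backup, if one exists.
    previous = {}
    if os.path.exists(MANIFEST):
        with open(MANIFEST) as f:
            previous = json.load(f)

    current = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = file_hash(path)
            current[path] = digest
            # Only transfer files that are new or changed since the last run.
            if previous.get(path) != digest:
                with open(path, "rb") as f:
                    upload_to_provider(path, gzip.compress(f.read()))

    # Persist the new manifest so the next run can compute the delta.
    with open(MANIFEST, "w") as f:
        json.dump(current, f)

if __name__ == "__main__":
    incremental_backup("/home/user/documents")   # hypothetical backup root
```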
Monday, October 14, 2013
Sock Puppet Marketing
Sock puppet marketing is the use of a false online identity to artificially stimulate demand for a product, brand or service. The false identity is called a sock puppet. A primary goal of sock puppet marketing is to increase sales by posting positive comments about a product, service or brand on web sites. Alternatively, a sock puppet might be used to post negative comments that denigrate a competitor. Sock puppet marketing and sock puppetry in general are unethical. When exposed, sock puppet marketing can damage the reputation and brand of a product or service.
Thursday, October 10, 2013
Dynamic Pricing
Dynamic pricing, also called real-time pricing, is a highly flexible approach to setting the price of a product or service. The goal of dynamic pricing is to allow a company that sells goods or services over the Internet to adjust prices on the fly in response to market demands. Changes are controlled by pricing bots, which are software agents that gather data and use algorithms to adjust pricing according to business rules. Typically, the business rules take into account such things as the time of day, the day of the week, the level of demand and competitors' pricing. With the advent of big data and big data analytics, business rules can be crafted to adjust prices for specific customers based on criteria such as the customer's zip code, how often the customer has made purchases in the past and how much the customer typically spends.
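As a rough illustration of how such business rules might be expressed in code, the Python sketch below adjusts a base price using demand, time of day and competitor pricing; the specific rules and coefficients are invented for the example.

```python
from datetime import datetime

BASE_PRICE = 100.00

def dynamic_price(base_price, demand_ratio, competitor_price, now=None):
    """Apply simple illustrative business rules to a base price.

    demand_ratio: current demand divided by typical demand (1.0 = normal).
    competitor_price: lowest price observed for the same item elsewhere.
    """
    now = now or datetime.now()
    price = base_price

    # Rule 1: raise the price when demand runs above normal, lower it when demand lags.
    price *= 1.0 + 0.25 * (demand_ratio - 1.0)

    # Rule 2: small evening and weekend premium.
    if now.hour >= 18 or now.weekday() >= 5:
        price *= 1.05

    # Rule 3: never price more than 10% above the cheapest competitor.
    price = min(price, competitor_price * 1.10)

    return round(price, 2)

print(dynamic_price(BASE_PRICE, demand_ratio=1.4, competitor_price=104.50))
```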
Wednesday, October 9, 2013
Wireshark
Wireshark is an open source network forensics tool for profiling network traffic and analyzing packets. Such a tool is often referred to as a network analyzer, network protocol analyzer or sniffer. Wireshark is a popular tool for testing basic traffic transmission, analyzing bandwidth usage, testing application security and identifying faulty configurations. The tool is quite versatile, allowing network administrators to examine traffic details at a variety of levels. Because Wireshark is open source, its filters can be tailored to the unique needs of a specific enterprise network.
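Wireshark itself is operated through its graphical interface or the companion tshark command-line tool, but the kind of filtered live capture it performs can be sketched in Python with the third-party scapy library -- shown here purely as an illustration of packet sniffing, not as part of Wireshark.

```python
from scapy.all import sniff  # third-party packet-capture library

def show(packet):
    # Print a one-line summary of each captured packet.
    print(packet.summary())

# Capture ten packets matching a BPF capture filter, much as a
# capture filter would be applied in a protocol analyzer.
sniff(filter="tcp port 80", prn=show, count=10)
```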
Tuesday, October 8, 2013
Big Data Management
Big data management is the organization, administration and governance of large volumes of both structured and unstructured data. The goal of big data management is to ensure a high level of data quality and accessibility for business purposes. By examining data from a variety of sources -- including call detail records, system logs and social media sites -- a company can gain insight into what business processes need improvement and how to gain a competitive advantage. As part of the process, the company must decide what data must be kept for compliance reasons, what data can be disposed of and what data should be kept for in-memory analysis. The process requires careful data classification so that, ultimately, the organization can work with smaller, more manageable sets of data.
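A simplified Python sketch of that classification step might look like the following; the record types, retention rules and time window are invented for illustration, and real policies would come from legal and business requirements.

```python
from datetime import datetime, timedelta

# Illustrative retention rules only.
COMPLIANCE_TYPES = {"call_detail_record", "financial_transaction"}
HOT_TYPES = {"clickstream", "social_media_post"}
HOT_WINDOW = timedelta(days=30)

def classify(record_type, created_at, now=None):
    """Assign a record to a retention/processing class."""
    now = now or datetime.utcnow()
    if record_type in COMPLIANCE_TYPES:
        return "retain_for_compliance"
    if record_type in HOT_TYPES and now - created_at <= HOT_WINDOW:
        return "keep_in_memory_for_analysis"
    return "candidate_for_disposal"

print(classify("call_detail_record", datetime(2013, 1, 5)))
print(classify("clickstream", datetime.utcnow() - timedelta(days=3)))
```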
Thursday, October 3, 2013
Application Delivery Controller
An application delivery controller (ADC) is a network device that manages client connections to complex Web and enterprise applications. In general, a controller is a hardware device or a software program that manages or directs the flow of data between two entities. An ADC essentially functions as a load balancer, optimizing end-user performance, reliability, data center resource use and security for enterprise applications. Typically, ADCs are strategically placed to be a single point of control that can determine the security needs of an application and provide simplified authentication, authorization and accounting (AAA). An ADC can accelerate the performance of applications delivered over the wide area network (WAN) by implementing optimization techniques such as compression and reverse caching. With reverse caching, new user requests for static or dynamic Web objects can often be delivered from a cache in the ADC rather than having to be regenerated by the servers.
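The reverse-caching behavior can be sketched in a few lines of Python; the cache below is a toy in-memory model meant only to show the idea, not an actual ADC implementation.

```python
import time

class ReverseCache:
    """Tiny in-memory reverse cache, as an ADC might keep for Web objects."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (response, expiry timestamp)

    def get(self, url, fetch_from_origin):
        entry = self.store.get(url)
        if entry and entry[1] > time.time():
            return entry[0]                      # served from the ADC cache
        response = fetch_from_origin(url)        # regenerated by the origin server
        self.store[url] = (response, time.time() + self.ttl)
        return response

def origin(url):
    # Stand-in for the back-end server generating the object.
    return f"<html>content for {url}</html>"

cache = ReverseCache(ttl_seconds=30)
print(cache.get("/index.html", origin))  # cache miss: goes to the origin
print(cache.get("/index.html", origin))  # cache hit: answered by the cache
```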
STONITH (Shoot The Other Node In The Head)
STONITH (Shoot The Other Node In The Head) is a Linux service for maintaining the integrity of nodes in a high-availability (HA) cluster. STONITH automatically powers down a node that is not working correctly. An administrator might employ STONITH if one of the nodes in a cluster cannot be reached by the other node(s) in the cluster. STONITH is traditionally implemented by hardware solutions that allow a cluster to talk to a physical server without involving the operating system (OS). Although hardware-based STONITH works well, this approach requires specific hardware to be installed in each server, which can make the nodes more expensive and result in hardware vendor lock-in. A disk-based solution, such as split brain detection (SBD), can be easier to implement because this approach requires no specific hardware.
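The fencing idea can be illustrated with a toy Python monitor that powers off a peer after repeated failed health checks. The addresses and the ipmitool invocation below are hypothetical, and real clusters delegate this work to dedicated fencing agents rather than ad hoc scripts.

```python
import subprocess
import time

PEER_ADDRESS = "10.0.0.2"          # hypothetical cluster peer
FENCE_COMMAND = ["ipmitool", "-H", "10.0.0.102", "chassis", "power", "off"]
# ^ hypothetical out-of-band power-off via the peer's management controller
MISSED_CHECKS_BEFORE_FENCE = 3

def peer_alive():
    # One missed ping counts as one failed health check.
    return subprocess.call(["ping", "-c", "1", "-W", "1", PEER_ADDRESS],
                           stdout=subprocess.DEVNULL) == 0

def monitor():
    failures = 0
    while True:
        if peer_alive():
            failures = 0
        else:
            failures += 1
            if failures >= MISSED_CHECKS_BEFORE_FENCE:
                # Fence the unreachable node so it cannot corrupt shared state.
                subprocess.call(FENCE_COMMAND)
                return
        time.sleep(5)

if __name__ == "__main__":
    monitor()
```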
Tuesday, October 1, 2013
Kyoto cooling
Kyoto cooling, also called the Kyoto wheel, is an energy-efficient free cooling method for data centers developed in the Netherlands. Kyoto cooling uses outside air to remove the heat created by computing equipment instead of using mechanical refrigeration. Compared with traditional computer room air conditioners, computer room air handlers and other conventional cooling methods, Kyoto cooling uses between 75% and 92% less power. Kyoto cooling is named after the Kyoto Protocol, an international environmental impact agreement. The Kyoto cooling method uses a thermal wheel that contains a honeycomb lattice made out of heat-absorbent material. The wheel, which is half inside and half outside the building, removes heat from circulating air by picking up heat from the data center and then releasing it into the cooler outside air as the wheel rotates. The patented Kyoto method uses the energy transferred by the honeycomb system to run small fans that help pull air through each half of the system. It also takes advantage of the hot and cold aisle concept to completely isolate the flow of hot and cold air going to and from the wheel.
Monday, September 30, 2013
hardware emulation
A hardware emulator is designed to simulate the workings of an entirely different hardware platform than the one it runs on. Hardware emulation is generally used to debug and verify a system under design. An administrator must use hardware emulation to run an unsupported operating system (OS) within a virtual machine (VM). In such a scenario, the virtual machine does not have direct access to server hardware. Instead, an emulation layer directs traffic between physical and virtual hardware.
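At its core, emulation means interpreting the guest platform's instructions in software. The toy Python interpreter below, written for an invented three-instruction machine, illustrates the fetch-decode-execute loop involved; real emulators do the same thing for a full instruction set plus devices.

```python
# Toy emulator for an invented 3-instruction machine.
LOAD, ADD, HALT = 0x01, 0x02, 0xFF

def run(program):
    accumulator = 0
    pc = 0                      # program counter
    while True:
        opcode = program[pc]    # fetch
        if opcode == LOAD:      # decode and execute
            accumulator = program[pc + 1]
            pc += 2
        elif opcode == ADD:
            accumulator += program[pc + 1]
            pc += 2
        elif opcode == HALT:
            return accumulator
        else:
            raise ValueError(f"unknown opcode {opcode:#x}")

# LOAD 5; ADD 7; HALT  ->  12
print(run([LOAD, 5, ADD, 7, HALT]))
```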
Wednesday, June 12, 2013
colocation (colo)
A colocation (colo) is a data center facility in which a business can rent space for servers and other computing hardware. Typically, a colo provides the building, cooling, power, bandwidth and physical security while the customer provides servers, storage and networking equipment. Space in the facility is often leased by the rack, cabinet, cage or room. There are several reasons a business might choose a colo over building its own data center, but one of the main drivers is the capital expenditures (CAPEX) associated with building, maintaining and updating a large computing facility. Many colos have extended their offerings to include managed services that support their customers' business initiatives. In the past, colos were often used by private enterprises for disaster recovery and redundancy. Today, colos are especially popular with cloud service providers.
Thursday, March 28, 2013
Stress testing
Stress testing is the process of determining the ability of a computer, network, program or device to maintain a certain level of effectiveness under unfavorable conditions. The process can involve quantitative tests done in a lab, such as measuring the frequency of errors or system crashes. The term also refers to qualitative evaluation of factors such as availability or resistance to denial-of-service (DoS) attacks. Stress testing is often done in conjunction with the more general process of performance testing.
When conducting a stress test, an adverse environment is deliberately created and maintained. Actions involved may include:
- Running several resource-intensive applications in a single computer at the same time
- Attempting to hack into a computer and use it as a zombie to spread spam
- Flooding a server with useless e-mail messages
- Making numerous, concurrent attempts to access a single Web site
- Attempting to infect a system with viruses, Trojans, spyware or other malware.
The adverse condition is progressively and methodically worsened, until the performance level falls below a certain minimum or the system fails altogether. In order to obtain the most meaningful results, individual stressors are varied one by one, leaving the others constant. This makes it possible to pinpoint specific weaknesses and vulnerabilities. For example, a computer may have adequate memory but inadequate security. Such a system, while able to run numerous applications simultaneously without trouble, may crash easily when attacked by a hacker intent on shutting it down.
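As a simple illustration, the Python sketch below varies a single stressor -- the number of concurrent HTTP requests -- step by step until the error rate crosses a threshold. The target URL is a hypothetical system under test, and a script like this should only ever be pointed at infrastructure you are authorized to test.

```python
import concurrent.futures
import urllib.request

TARGET = "http://test.example.internal/"   # hypothetical system under test
FAILURE_THRESHOLD = 0.05                   # stop once 5% of requests fail

def request_once(url):
    try:
        urllib.request.urlopen(url, timeout=2)
        return True
    except Exception:
        return False

def stress(url):
    # Vary one stressor -- concurrent request volume -- while everything
    # else stays constant.
    for concurrency in range(10, 501, 10):
        with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
            results = list(pool.map(request_once, [url] * concurrency))
        error_rate = 1 - sum(results) / len(results)
        print(f"{concurrency:4d} concurrent requests -> {error_rate:.1%} errors")
        if error_rate > FAILURE_THRESHOLD:
            print("performance fell below the acceptable minimum; stopping")
            break

if __name__ == "__main__":
    stress(TARGET)
```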
Stress testing can be time-consuming and tedious. Nevertheless, some test personnel enjoy watching a system break down under increasingly intense attacks or stress factors. Stress testing can provide a means to measure graceful degradation, the ability of a system to maintain limited functionality even when a large part of it has been compromised.
Once the testing process has caused a failure, the final component of stress testing is determining how well or how fast a system can recover after an adverse event.
Monday, March 25, 2013
FlowVisor
FlowVisor is an experimental software-defined networking (SDN) controller that enables network virtualization by dividing a physical network into multiple logical networks. FlowVisor ensures that each controller touches only the switches and resources assigned to it. It also partitions bandwidth and flow table resources on each switch and assigns those partitions to individual controllers.
FlowVisor slices a physical network into abstracted units of bandwidth, topology, traffic and network device central processing units (CPUs). It operates as a transparent proxy controller between the physical switches of an OpenFlow network and other OpenFlow controllers and enables multiple controllers to operate the same physical infrastructure, much like a server hypervisor allows multiple operating systems to use the same x86-based hardware. Other standard OpenFlow controllers then operate their own individual network slices through the FlowVisor proxy. This arrangement allows multiple OpenFlow controllers to run virtual networks on the same physical infrastructure.
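The slicing policy can be modeled conceptually in a few lines of Python; this is not FlowVisor code, and the slice names and flowspace fields are invented for illustration.

```python
# Conceptual model of how a slicing proxy decides whether a controller
# is allowed to program a given switch and VLAN.
SLICES = {
    "research": {"switches": {"s1", "s2"}, "vlans": {100, 101}},
    "production": {"switches": {"s1", "s3"}, "vlans": {200}},
}

def rule_permitted(slice_name, switch, vlan):
    """Return True if the controller owning this slice may install the rule."""
    flowspace = SLICES[slice_name]
    return switch in flowspace["switches"] and vlan in flowspace["vlans"]

print(rule_permitted("research", "s1", 100))    # True: within the slice
print(rule_permitted("research", "s3", 100))    # False: switch belongs elsewhere
```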
The SDN research community considers FlowVisor an experimental technology, although Stanford University, a leading SDN research institution, has run FlowVisor in its production network since 2009. FlowVisor lacks some of the basic network management interfaces that would make it enterprise-grade. It currently has no command line interface or Web-based administration console. Instead, users make changes to the technology with configuration file updates.
Friday, March 22, 2013
Application Security
Application security is the use of software, hardware, and procedural methods to protect applications from external threats.
Once an afterthought in software design, security is becoming an increasingly important concern during development as applications become more frequently accessible over networks and are, as a result, vulnerable to a wide variety of threats. Security measures built into applications and a sound application security routine minimize the likelihood that unauthorized code will be able to manipulate applications to access, steal, modify, or delete sensitive data.
Actions taken to ensure application security are sometimes called countermeasures. The most basic software countermeasure is an application firewall that limits the execution of files or the handling of data by specific installed programs. The most common hardware countermeasure is a router that can prevent the IP address of an individual computer from being directly visible on the Internet. Other countermeasures include conventional firewalls, encryption/decryption programs, anti-virus programs, spyware detection/removal programs and biometric authentication systems.
Application security can be enhanced by rigorously defining enterprise assets, identifying what each application does (or will do) with respect to these assets, creating a security profile for each application, identifying and prioritizing potential threats and documenting adverse events and the actions taken in each case. This process is known as threat modeling. In this context, a threat is any potential or actual adverse event that can compromise the assets of an enterprise, including both malicious events, such as a denial-of-service (DoS) attack, and unplanned events, such as the failure of a storage device.
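A minimal sketch of the prioritization step in threat modeling is to rank each identified threat by likelihood times impact, as in the Python example below; the threats and scores are illustrative only.

```python
# Rank identified threats by likelihood x impact (values are illustrative).
threats = [
    {"name": "SQL injection against order form", "likelihood": 4, "impact": 5},
    {"name": "DoS attack on public API",         "likelihood": 3, "impact": 4},
    {"name": "Storage device failure",           "likelihood": 2, "impact": 4},
    {"name": "Stolen session cookie",            "likelihood": 3, "impact": 3},
]

for threat in threats:
    threat["risk"] = threat["likelihood"] * threat["impact"]

# Highest-risk threats first, so countermeasures can be planned in that order.
for threat in sorted(threats, key=lambda t: t["risk"], reverse=True):
    print(f'{threat["risk"]:2d}  {threat["name"]}')
```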
Wednesday, March 20, 2013
Thunderbolt
Thunderbolt (code-named "Light Peak") is a high-speed, bidirectional input/output (I/O) technology that can transfer data of all types on a single cable at speeds of up to 10 Gbps (billions of bits per second). A single cable up to three meters (10 feet) long can support seven devices simultaneously in a daisy chain.
According to Intel, a Thunderbolt connection can transfer 1 TB (terabyte) of data in less than five minutes and a typical high-definition (HD) video file in less than 30 seconds. The high speed and low latency make Thunderbolt ideal for backup, restore, and archiving operations. Of the seven devices (maximum) that a Thunderbolt connection can support at one time, two of them can be displays. Because of the exceptional transfer rate that Thunderbolt offers, the technology is ideal for gamers and video professionals.
The nickname "Light Peak" derives from Intel's original intent to use optical fiber cabling. However, engineers discovered that copper cables could provide up to 10 Gbps at a lower cost than optical fiber. In addition, Intel found that copper cabling could deliver up to 10 watts of power to attached devices at the requisite speeds.
Monday, March 18, 2013
Freemium
Freemium is a business model in which the owner or service provider offers basic features to users at no cost and charges a premium for supplemental or advanced features. The term, which is a combination of the words "free" and "premium," was coined by Jarid Lukin of Alacra in 2006 after venture capitalist Fred Wilson came up with the idea.
The freemium model is popular with Web 2.0 companies and Web-based e-mail services. For an enterprise to implement a freemium service, the first step is to acquire a loyal customer base. Premium features or add-ons can be offered by means of online advertising, magazine advertising, referral networks, search engine marketing and word of mouth. Services that have successfully employed the freemium model include Ad-Aware, Flickr, NewsGator, Skype, Box.net and Webroot.
In an effective freemium service, customers find it easy to acquire the basic set of features. The premium features are typically promoted in an indirect way, avoiding "in-your-face" banners or pop-up ads. For example, an anti-spyware program can offer manual offline scanning and updates for free. If the user attempts to activate a specialized function such as continuous malware monitoring, a note appears to the effect that it is a premium feature. If the user wants to obtain that feature, the purchasing or subscription process is simple and straightforward.
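A feature gate of this kind can be sketched in a few lines of Python; the plan names and feature labels below are invented for the example.

```python
# Illustrative feature gate for a freemium application.
FREE_FEATURES = {"manual_scan", "definition_updates"}
PREMIUM_FEATURES = FREE_FEATURES | {"continuous_monitoring", "scheduled_scans"}

def can_use(feature, plan):
    allowed = PREMIUM_FEATURES if plan == "premium" else FREE_FEATURES
    return feature in allowed

def activate(feature, plan):
    if can_use(feature, plan):
        return f"{feature} enabled"
    # Gentle upsell rather than an intrusive ad.
    return f"{feature} is a premium feature -- upgrade to enable it"

print(activate("manual_scan", "free"))
print(activate("continuous_monitoring", "free"))
print(activate("continuous_monitoring", "premium"))
```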
Wednesday, March 13, 2013
Microtargeting
Microtargeting (also called micro-targeting or micro-niche targeting) is a marketing strategy that uses consumer data, demographics and big data analytics to identify the interests of specific individuals or very small groups of like-minded individuals and influence their thoughts or actions. An important goal of a microtargeting initiative is to know the target audience so well that messages get delivered through the target's preferred communication channel.
In the 2012 United States Presidential campaign, microtargeting techniques were successfully used to interact with and appeal to voters on an individualized basis. To achieve this type of personalization on such a massive scale, political campaign managers collected (and continually updated) detailed information about individual voters and used predictive analytics to model voter sentiment. Understanding the voting population on an individual level enabled campaign leaders to go beyond standard political party-oriented messages and communicate with voters about specific topics in order to influence the voter's decision.
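As a toy illustration of predictive scoring, the Python sketch below trains a logistic regression with the third-party scikit-learn library on made-up outreach data and uses the predicted probability to pick a contact channel; none of the features or labels reflect real voter data.

```python
from sklearn.linear_model import LogisticRegression

# Features per person: [age, donated_before (0/1), rural (0/1)] -- made up for illustration.
X = [[23, 0, 0], [45, 1, 0], [67, 1, 1], [34, 0, 1], [52, 0, 0], [29, 1, 0]]
y = [1, 1, 0, 0, 1, 1]   # 1 = responded favorably to past outreach

model = LogisticRegression()
model.fit(X, y)

# Score a new individual and pick an outreach channel from the score.
probability = model.predict_proba([[40, 1, 0]])[0][1]
channel = "personal email" if probability > 0.5 else "direct mail"
print(f"predicted favorability {probability:.2f} -> contact via {channel}")
```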
Tuesday, March 12, 2013
3Vs (volume, variety and velocity)
3Vs (volume, variety and velocity) are three defining properties or dimensions of big data. Volume refers to the amount of data, variety refers to the number of types of data and velocity refers to the speed of data processing. According to the 3Vs model, the challenges of big data management result from the expansion of all three properties, rather than just the volume alone -- the sheer amount of data to be managed.
Gartner analyst Doug Laney introduced the 3Vs concept in a 2001 Meta Group research publication, 3D Data Management: Controlling Data Volume, Velocity and Variety. More recently, additional Vs have been proposed for addition to the model, including variability -- the increase in the range of values typical of a large data set -- and value, which addresses the need for valuation of enterprise data.
The infographic below (reproduced with permission from Diya Soubra's post, The 3Vs that define Big Data, on Data Science Central) illustrates the increasing expansion of the 3Vs.

Monday, March 11, 2013
Spaghetti Diagram
A spaghetti diagram (sometimes called a physical process flow or a point-to-point workflow diagram) is a line-based representation of the continuous flow of some entity, such as a person, a product or a piece of information, as it goes through some process. The name comes from the resemblance of the final product to a bowl of cooked spaghetti.
Spaghetti diagrams are often used in agile project management. Unlike spaghetti code, which is a derogatory term for unstructured language coding, the term spaghetti diagram carries no negative connotation.
Friday, March 8, 2013
Shadow IT
Shadow IT is hardware or software within an enterprise that is not supported by the organization's central IT department. Although the label itself is neutral, the term often carries a negative connotation because it implies that the IT department has not approved the technology or doesn't even know that employees are using it.
In the past, shadow IT was often the result of an impatient employee's desire for immediate access to hardware, software or a specific web service without going through the necessary steps to obtain the technology through corporate channels. With the consumerization of IT and cloud computing, the meaning has expanded to include personal technology that employees use at work (see BYOD policy) or niche technology that meets the unique needs of a particular business division and is supported by a third-party service provider or in-house group, instead of by corporate IT.
Shadow IT can introduce security risks when unsupported hardware and software are not subject to the same security measures that are applied to supported technologies. Furthermore, technologies that operate without the IT department's knowledge can negatively affect the user experience of other employees by impacting bandwidth and creating situations in which network or software application protocols conflict. Shadow IT can also become a compliance concern when, for example, end users use Dropbox or other free cloud storage services to store corporate data.
Feelings toward shadow IT are mixed; some IT administrators fear that if shadow IT is allowed, end users will create data silos and prevent information from flowing freely throughout the organization. Other administrators believe that in a fast-changing business world, the IT department must embrace shadow IT for the innovation it supplies and create policies for overseeing and monitoring its acceptable use.
Popular end user shadow technologies include smartphones, portable USB drives and tablets. Popular greynet applications include Gmail, instant messaging services and Skype.
Thursday, March 7, 2013
oVirt
oVirt is an open source data center virtualization platform, developed and promoted through a project started by Red Hat Inc.
oVirt, which offers large-scale, centralized management for server and desktop virtualization, was designed as an open source alternative to VMware vCenter/vSphere. oVirt version 3.1 was released in August 2012 and features live snapshots, network adapter hot plugging, and support for accessing externally hosted logical unit numbers (LUNs) from virtual machines (VMs).
oVirt is built upon Red Hat Enterprise Virtualization Manager (RHEV-M) code, the kernel-based virtual machine (KVM) hypervisor, the oVirt node for running VMs and virtualization tools such as libvirt and v2v. It can use locally attached storage, Network File System (NFS), iSCSI or Fibre Channel interfaces to communicate with host servers.
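Because oVirt manages KVM hosts through libvirt, a small taste of that underlying layer can be had with the libvirt Python bindings, as in the sketch below; the read-only connection URI assumes a local QEMU/KVM hypervisor.

```python
# List virtual machines on a local KVM host via the libvirt Python bindings.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")   # assumes a local QEMU/KVM host
try:
    for domain in conn.listAllDomains():
        state = "running" if domain.isActive() else "stopped"
        print(f"{domain.name():20s} {state}")
finally:
    conn.close()
```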
Digital CRM
Digital CRM is an approach to customer relationship management (CRM) that relies on digital channels and data to manage interactions with customers. The term digital CRM is often associated with the Internet of Things, a scenario in which computer processors capable of sending and receiving data are embedded in everyday objects. In such a scenario, the customer may not be human -- the customer might be a fuel tank named #54356, capable of sending an automated message to the supplier and requesting a delivery.
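A toy Python sketch of such a machine-to-machine "customer" might look like the following; the asset ID, threshold and send_to_supplier stub are invented for illustration.

```python
import json

ASSET_ID = "tank-54356"        # hypothetical asset identifier
REORDER_THRESHOLD = 0.20       # request a delivery below 20% of capacity

def send_to_supplier(message):
    # Stand-in for the real transport, e.g. an HTTPS POST to the supplier's API.
    print("sending to supplier:", json.dumps(message))

def check_and_reorder(level_fraction):
    """Called with the latest fuel-level reading from the tank's sensor."""
    if level_fraction >= REORDER_THRESHOLD:
        return "no order needed"
    send_to_supplier({"asset": ASSET_ID, "request": "delivery",
                      "level": level_fraction})
    return "delivery requested"

print(check_and_reorder(0.45))
print(check_and_reorder(0.15))
```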