April 2016 Special Offer!

Monday, 25 April, 2016

Special Offer

So What is this Open Source Stuff?

Thursday, 21 April, 2016


As most people know by now, the Linux operating system has been developed under the philosophy of Open Source software originally pioneered by the Free Software Foundation as “free software”. Nevertheless, many people don’t truly appreciate just what Open Source really is. In this blog post, I’ll offer my perceptions.

Quite simply, Open Source is based on the notion that software should be freely available: to use, to modify, to copy.  The idea has been around for some twenty years in the technical culture that built the Internet and the World Wide Web and in recent years has spread to the commercial world.

There are a number of misconceptions about the nature of Open Source software.  Perhaps the best way to explain what it is, is to start by talking about what it isn’t.

  • Open Source is not shareware.  A precondition for the use of shareware is that you pay the copyright holder a fee.  Open source code is freely available and there is no obligation to pay for it.
  • Open Source is not Public Domain.  Public domain code, by definition, is not copyrighted.  Open Source code is copyrighted by its author who has released it under the terms of an Open Source software license.  The copyright owner thus gives you the right to use the code provided you adhere to the terms of the license.
  • Open Source is not necessarily free of charge.  Saying that there’s no obligation to pay for Open Source software doesn’t preclude you from charging a fee to package and distribute it.  A number of companies are in the specific business of selling packaged “distributions” of Linux.

Why would you pay someone for something you can get for free?  Presumably because everything is in one place and you can get some support from the vendor.  Of course the quality of support greatly depends on the vendor.

So “free” refers to freedom to use the code and not necessarily zero cost.  As someone said a number of years ago, “Think ‘free speech’, not ‘free beer’”.

Open Source code is:

  • Subject to the terms of an Open Source license, in many cases the GNU General Public License (see below).
  • Subject to critical peer review.  As an Open Source programmer, your code is out there for everyone to see and the Open Source community tends to be a very critical group.  Open Source code is subject to extensive testing and peer review.  It’s a Darwinian process in which only the best code survives.  “Best” of course is a subjective term.  It may be the best technical solution but it may also be completely unreadable.
  • Highly subversive.  The Open Source movement subverts the dominant paradigm, which says that intellectual property such as software must be jealously guarded so you can make a lot of money off of it.  In contrast, the Open Source philosophy is that software should be freely available to everyone for the maximum benefit of society.  Richard Stallman, founder of the Free Software Foundation, is particularly vocal in advocating that software should not have owners.

In the early years of the Open Source movement, Microsoft and other proprietary software vendors saw it as a serious threat to their business model.  Microsoft representatives went so far as to characterize Open Source as “un-American”.  A Microsoft executive publicly stated in 2001 that “open source is an intellectual property destroyer. I can’t imagine something that could be worse than this for the software business and the intellectual-property business.”

In recent years however, leading software vendors, including Microsoft, have embraced the Open Source movement. Many even give their programmers and engineers company time to contribute to the Open Source community.  And it’s not just charity, it’s good business!

So what is an Open Source license? Most End User License Agreements (EULA) for software are specifically designed to restrict what you are allowed to do with the software covered by the license.  Typical restrictions prevent you from making copies or otherwise redistributing it.  You are often admonished not to attempt to “reverse-engineer” the software.

By contrast, an Open Source license is intended to guarantee your rights to use, modify and copy the subject software as much as you’d like.  Along with the rights comes an obligation.  If you modify and subsequently distribute software covered by an Open Source license, you are obligated to make available the modified source code under the same terms.  The changes become a “derivative work” which is also subject to the terms of the license.  This allows other users to understand the software better and to make further changes if they wish.

Arguably the best-known, and most widely used, Open Source license is the GNU General Public License (GPL) first released by the Free Software Foundation (FSF) in 1989.  The Linux kernel is licensed under the GPL.  But the GPL has a problem that makes it unworkable in many commercial situations.  Software that does nothing more than link to a library released under the GPL is considered a derivative work and is therefore subject to the terms of the GPL and must be made available in source code form. Software vendors who wish to maintain their applications as proprietary have a problem with that.

To get around this, and thus promote the development of Open Source libraries, the Free Software Foundation came up with the “Library GPL” (LGPL).  The distinction is that a program linked to a library covered by the LGPL is not considered a derivative work, so there’s no requirement to distribute the source, although you must still make available the source to the library itself.

Subsequently, the LGPL became known as the “Lesser GPL” because it offers less freedom to the user.  So while the LGPL makes it possible to develop proprietary products using Open Source software, the FSF encourages developers to place their libraries under the GPL in the interest of maximizing openness.

At the other end of the scale is the Berkeley Software Distribution (BSD) license, which predates the GPL by some 12 years.  It “suggests”, but does not require, that source code modifications be returned to the developer community and it specifically allows derived products to use other licenses, including proprietary ones.

Other licenses—and there are quite a few—fall somewhere between these two poles. The Mozilla Public License (MPL) for example, developed in 1998 when Netscape made its browser open-source, contains more requirements for derivative works than the BSD license, but fewer than the GPL or LGPL.  The Eclipse Public License (EPL) specifically allows “plug-ins” to remain proprietary, but still requires that modifications to Eclipse itself be Open Source. The Open Source Initiative (OSI), a non-profit group that certifies licenses meeting its definition of Open Source, currently lists 79 certified licenses on its website.

You may be tempted to think that the GPL is just an academic exercise. Nobody takes it seriously, right? Wrong! There are people, the “GPL police” if you will, some of whom have way too much time on their hands, and they take the GPL very seriously. They will “out” anyone who doesn’t play by the rules, and there are examples of vendors who have been taken to court as a result.

Bottom line: if you’re concerned about keeping your code proprietary, be very careful about where your models come from.  Don’t blindly copy large chunks of code that are identified as GPL.  Use the code as a model and write your own.  If your product is going to incorporate Open Source code, you may want to consult an attorney who specializes in intellectual property law related to Open Source.

Well, this has been a brief personal tour through the world of Open Source software. Not surprisingly, there are a lot of other resources out there on the web. Just google “open source software”.

This article by Doug Abbott is on Open Source Software.

Developing Critical Systems — Is Testing Enough?

Monday, 28 March, 2016

Introduction

Software is everywhere. A lot of it works well all day long. Some of it is terrible. Some of it can kill you.

This article is about critical software, the stuff that really needs to work, and that can have significant consequences if it doesn’t.


There are three trends I have noticed in software organizations:

  1. The desire to get software into more critical systems (e.g., medical, automotive, transportation, finance and aviation).
  2. Software organizations are either serious about quality or hopeful. There isn’t much in between.
  3. For the latter, there is only a vague consideration that current engineering practices should improve when risk increases. It is almost assumed that if software is called “critical,” then it will work, and if it doesn’t, a few more weeks of testing will fix it.

The “just test more” approach works fine until someone is hurt, a contract is lost, or there is serious legal action.

Doing it

Writing software is hard, and writing critical software is harder because there are numerous scenarios the software has to react to. An increase in risk should be matched by stronger development practices that mitigate the new risk.

The typical (and not so great) approach to improve quality is to:

  • Test more and longer.
  • Assume that if the system passes the tests then it must work.
  • Downplay upfront practices such as requirements, design, good coding practices and peer reviews since they are not coding.

The trouble with the “test more and longer” approach is that if some of the upfront practices were not done, then testing is just a poke in the dark. That is, the testers have no clear picture of what conditions to test for, or of when testing can be considered done.

But the tests pass, so it must be OK?

It is wonderful that the (limited) test cases passed (in the limited schedule-crunched time you had for testing). However, let us dig deeper:

  • Do the test cases cover all of the likely functions, system scenarios and user scenarios?
  • Do the test cases cover every line of code, so that you know for sure that some untested conditional branch doesn’t cause a system failure?
  • Did anyone look at the code to see that, although it passed the (limited) test cases, the call to “calculate-stuff(input)” will crash the system if the input is zero (when the year is an even number)? See the sketch after this list.
  • Is the code a huge spaghetti mess that no one actually understands? If a large plate of critical spaghetti code doesn’t make you or your management nervous, you might be dead!
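To make the point concrete, here is a minimal sketch in Python. The function calculate_stuff and its behavior are hypothetical, standing in for the “calculate-stuff(input)” example above; the point is that the boundary test is the one that exposes the crash the happy-path tests never touch.

```python
import unittest

def calculate_stuff(value):
    # Hypothetical routine standing in for "calculate-stuff(input)" above.
    # The naive implementation divides by its input and crashes when value == 0.
    return 100 / value

class CalculateStuffBoundaryTests(unittest.TestCase):
    def test_typical_input(self):
        # The happy-path case that limited, schedule-crunched testing usually covers.
        self.assertEqual(calculate_stuff(4), 25)

    def test_zero_input(self):
        # The boundary case a deliberate test plan (or a code review) would add.
        # Against the naive implementation this raises ZeroDivisionError,
        # exposing the crash in the lab instead of in the field.
        self.assertEqual(calculate_stuff(0), 0)

if __name__ == "__main__":
    unittest.main()
```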

A slightly different approach

In a previous blog I listed some standard quality activities for any type of organization that can be applied selectively to high-risk areas. Those were:

  • Peer reviews of requirements, design information and interfaces
  • Peer reviews of code and interface definitions
  • Peer reviews of test cases and test procedures
  • Prototypes and simulation
  • Component testing
  • Code coverage checks to determine whether the code has been tested
  • Process audits to maintain the adoption of the organization’s best practices
  • Integration testing
  • Analysis of defect statistics to determine product state and areas for further investigation
  • System and acceptance testing using the intended environment, user-oriented requirements and exception conditions

Here are some additional ones if you are in the “This-critical-system-really-must-work” business.

  • Definition of requirement quality attributes to define hard quality expectations (e.g., reliability, performance, accuracy, fault tolerance).
  • Tracing requirements to test cases to know for sure that the system actually does what it is defined to do.
  • Peer review and test of new code, reused code, and “cool code we found on the internet.” Do you really know what you have? If no one has looked, then you don’t know.
  • Design for reliability to add characteristics ensuring that defined run periods are met (e.g., a fail-safe recovery vs. a blue screen after 1,000 hours).
  • Test coverage analysis to know what has actually been tested (a minimal sketch follows this list).
  • Defect density analysis to understand quality trends and hot spots.
  • Hazard and risk analysis of critical functions.

For software organizations that have no design, few requirements, no peer reviews, no traceability and no code coverage analysis, all bets are off.

What you can do

Writing reliable critical code is not easy, and applying the quality practices listed above can be overwhelming. To start, identify between 5 percent and 20 percent of the system to investigate. Here are some example criteria to identify initial system areas to focus on:

  • The most critical to the program’s operation
  • The most used (and therefore visible) section in the product
  • The most costly if defects were to exist
  • The most error-prone section based on current defect data
  • The least well-known section
  • The most frequently changed (and therefore high-risk) section


This article by Neil Potter is on Developing Critical Systems.

The Future of an Oracle Certified Professional

Monday, 29 February, 2016

Planning to get Oracle® certified? Well done! You have made the right decision – it will make you stand out in a highly competitive market. Apart from giving you broader access to the industry’s most challenging opportunities, the certifications provided by Oracle® Corporation demonstrate the knowledge and skills required to support core Oracle® products. With competition increasing, employers cherry-pick the best candidates from the lot.

When you become a Certified Professional recognized by Oracle®, you demonstrate the skills required to fit the chosen role. IT professionals who hold the OCA credential have a distinct competitive advantage over other IT aspirants.

With each step you take towards gaining knowledge and skills, you earn a certification that:

  • Accelerates your professional development
  • Improves your productivity
  • Enhances your credibility

In addition, certification enables companies to hire proven performers who justify the company’s investment in Oracle® technology. The scenario-based tests included in Oracle® certification give an effective assessment of problem-solving ability and hands-on expertise. Certification also enables employers to recognize your skills and your knowledge of installing, configuring and maintaining the database, thereby adding value to your career growth.

Oracle offers different certification tracks, including Oracle Certified Professional (OCP) and Oracle Certified Associate (OCA). Each of these certifications holds a distinct advantage, highlights your achievements and identifies you as a valuable asset to the company. Survey reviews and statistical analysis conducted by WordPress and Payscale for professionals certified by Oracle® show that:

  • 82% of OCPs realized a major acceleration in their earning capability.
  • 42% of OCPs said they would pursue the accreditation if their employer would pay for it.
  • 90% of OCPs agreed that they have been able to improve their job prospects.
  • 89% of OCPs agreed that they consider themselves better qualified to manage complex issues and projects, and that they tend to stay with their company.

Below are some of the major benefits associated with the Oracle® OCA DBA 11g/12c certification:

  • Proven skills and expertise of an IT Professional
  • Ability to handle the massive and continually expanding needs of modern organizations
  • Complete knowledge of database backup & recovery, creating & maintaining data and preparing the database environment

Top Practices Used for Data Center Cooling

Monday, 22 February, 2016


Data Center Cooling is the practice of maintaining the ideal operating temperature at a data center facility using a combination of equipment, tools, techniques and processes. Temperature alarms are set at data center facilities to alert the responsible staff to abrupt rises in temperature. For the facility to function properly, certain practices are recommended for maintaining cooling and dealing with changes in temperature.

Data Center Cooling System

A data center cooling system basically consists of three components: infrastructure (air conditioners, air ducts, cooling towers and so on), cooling management software, and temperature monitoring equipment and processes.

Best Practices

Detection

Measure the IT load in kilowatts and the intake temperature across the center. Identify the hot spots that show a considerable temperature increase. After that, measure the airflow volume for each cooling unit and record each unit’s supply and return temperatures to determine the sensible cooling load.

Comparison

Compare the cooling load with the IT load measured in the previous step. Using the measured airflow and the return and supply air temperatures, determine the sensible capacity of each unit in kilowatts (a rough calculation is sketched below). After a thorough comparison, determine the maximum allowable intake temperature for the operating environment.
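As a rough illustration of the sensible-capacity step, here is a small sketch using the standard air-side heat equation (capacity = airflow × air density × specific heat × temperature difference). The airflow, temperature and IT-load figures are illustrative assumptions, not values from the article.

```python
# Rough sensible-capacity estimate for one cooling unit.
AIR_DENSITY = 1.2      # kg/m^3, typical for room-temperature air
SPECIFIC_HEAT = 1.006  # kJ/(kg*K) for dry air

def sensible_capacity_kw(airflow_m3_per_s, return_temp_c, supply_temp_c):
    """Sensible cooling delivered by one unit, in kilowatts."""
    delta_t = return_temp_c - supply_temp_c
    return airflow_m3_per_s * AIR_DENSITY * SPECIFIC_HEAT * delta_t

# Example unit: 4.7 m^3/s of airflow, 30 degC return air, 18 degC supply air.
unit_kw = sensible_capacity_kw(4.7, 30.0, 18.0)
print(f"Sensible capacity per unit: {unit_kw:.1f} kW")  # roughly 68 kW

# Compare the summed unit capacities against the measured IT load.
it_load_kw = 150.0
running_units = [unit_kw, unit_kw, unit_kw]
print(f"Total cooling {sum(running_units):.0f} kW vs IT load {it_load_kw:.0f} kW")
```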

Actions to be taken

Cable management

Seal the vertical fronts, cable holes and other air-leakage paths to manage airflow in the operating center. In particular, ensure that the cable holes in raised floors are sealed so that airflow leakage is minimized.

Aisle Management

Manage the hot and cold aisles to control airflow. Relocate all perforated floor tiles to the cold aisles, and align the ceiling return grilles with the hot aisles.

Reduce the number of cooling units

Estimate the number of cooling units required by dividing the IT load by the capacity of the smallest sensible cooling unit (a quick calculation is sketched below). After the estimate, reduce the number of running cooling units by turning off the ones that are not required; the units carrying the lowest sensible load should be identified and turned off first.

This process is done to ensure that cooling standards are maintained along with optimum power efficiency.
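A quick sketch of the unit-count estimate described above, again with illustrative figures rather than values from the article:

```python
import math

it_load_kw = 150.0       # hypothetical measured IT load
smallest_unit_kw = 68.0  # smallest sensible capacity among the installed units

required_units = math.ceil(it_load_kw / smallest_unit_kw)
print(f"Cooling units required: {required_units}")  # 3

# With, say, 5 units installed, the 2 carrying the lowest sensible load
# are the candidates to turn off.
installed_units = 5
print(f"Candidates to turn off: {installed_units - required_units}")
```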

Maintain the desired temperature

The best practice is to keep the temperature in the data center regulated so that neither too-high nor too-low temperatures prevail. By monitoring the temperature and comparing it against the target, the desired level can be maintained. Temperature alarms also prove to be a very useful tool for alerting staff and ensuring that the temperature is maintained.

Top 10 Best Practices used for Energy Efficiency

Thursday, 18 February, 2016


Energy efficiency is a basic requirement for ensuring the optimum functioning of a data center. Practices that monitor energy consumption and avoid energy waste are the secret to running a successful data center facility. The top ten are discussed below.

  • Supplemental Load Reduction

Supplemental loads are the secondary contributors to a data center’s energy use. The supplemental load can be reduced by methods such as reducing the energy use of equipment and upgrading the building’s roofing and insulation.

  • Lighting

Maintain the lighting at the level required in the center, and upgrade the center’s system with energy-efficient light sources that consume less power.

  • Industrial Refrigeration

An industrial refrigeration system, if used in a data center, can be beneficial because it keeps the systems from overheating. Along with ensuring data safety, it reduces energy and operating costs.

  • Restructure the Air Distribution System

It has been found that oversized fans consume at least 10 percent more power. The center’s air distribution system should be equipped with right-sized fans that consume less power.

  • Supply Air Control

The supply air control system requires additional supply air temperature sensors, as air conditioners don’t come equipped with these. The system then regulates and monitors the temperature of the air being supplied. This implementation is much faster and more efficient than traditional systems.

  • Variable Speed Fan Drives

A fan’s power consumption is high and is a function of the cube of the fan’s speed (a quick illustration follows). Retrofitting constant-speed cooling fans with variable-speed drives, or replacing legacy units with new units that have built-in variable-speed capability, can reduce power consumption and increase efficiency.
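A quick illustration of the cube law with a hypothetical fan that draws 10 kW at full speed; the figures are illustrative only.

```python
def relative_power(speed_fraction):
    """Power draw relative to full speed, per the fan affinity (cube) law."""
    return speed_fraction ** 3

full_speed_kw = 10.0  # hypothetical fan drawing 10 kW at 100% speed
for pct in (100, 90, 80, 70):
    kw = full_speed_kw * relative_power(pct / 100)
    print(f"{pct:3d}% speed -> {kw:4.1f} kW")
# 100% -> 10.0 kW, 90% -> 7.3 kW, 80% -> 5.1 kW, 70% -> 3.4 kW
```

Slowing a fan to 80 percent of full speed cuts its power draw roughly in half, which is why variable-speed drives pay off.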

  • Retro Commissioning

Commissioning ensures that computer systems are designed and installed, and that their functionality and capabilities are tested, in accordance with the data center’s operational needs. Retro commissioning is a similar process of reviewing system alignment and optimization, but it takes place at a later point in the center’s life cycle. It recalibrates the systems to perform more efficiently, thus saving power.

  • Heating and Cooling Equipment Upgrades

Certain heating and cooling systems consume more power than required. It is always wise to choose equipment that is not oversized, is energy efficient and does not consume more power than necessary.

  • Increase Chilled Water Temperature

One of the largest power consumers in a data center is a chilled water system. Increasing the temperature of the chilled water system by raising the air handler temperature set point can help to reduce energy consumption.

  • Pumping Systems

Pumping-system measures such as replacing throttling valves with speed controls, reducing pump speed for fixed loads, installing a parallel system for highly variable loads, and replacing a motor or pump with a more efficient model all come with energy savings of 10 to 60 percent.

IT Asset Management and Its Role

Monday, 15 February, 2016


The IT department is a major division of an enterprise, and IT Asset Management plays a crucial strategic role in its profitable functioning. Hardware and software form the department’s two main components. Managing hardware and software inventory, purchases and redistribution is handled by this set of business practices. The two forms of IT Asset Management are described below.

Hardware Asset Management

This section deals with managing the physical components of the organization’s computer devices, from purchase to disposal. This includes determining the life cycle of hardware components and when they should be retired, and hence estimating the requirement for new devices. Processes such as approval of new hardware purchases, procurement and life-cycle management all come under this section.

Software Asset Management

In a similar manner, this section handles procuring and retiring software such as programming language versions, anti-virus versions and licenses. Software asset management is a more frequent process than hardware asset management, as software upgrades need to be handled on a regular basis.

Role of IT Asset Management

  • Life cycle Management

Life cycle management deals with determining the life cycle of hardware and software components, when the old ones should be disposed of, and when new components should be purchased. This is a complex process carried out in coordination with the management and procurement divisions of the organization. Responsibilities include developing policies and measurements regarding the life cycle of both hardware and software components.

  • Risk Management

This division comes under life cycle management and involves managing system issues, purchase costs, compliance and business policies. It mainly aims to minimize risk so that purchases and disposal of hardware and software remain cost-effective.

  • Integrated Software Solutions

To integrate itself with the other departments of the organization, the IT Asset Management division deploys integrated software solutions that let it work with all related departments on IT asset functions such as procurement, deployment and expense reporting.

  • Software protection

To protect the organization’s software, this department focuses on keeping the organization’s software up to date with the latest anti-virus protection to detect malware and viruses. This helps preserve the organization’s software assets and ensures that software functionality standards are maintained.

Python and its Data Structures

Thursday, 11 February, 2016


Overview of Python

Python, a programming language that was developed in the late 1980s by Guido van Rossum, is named after the British comedy group Monty Python. One of the advantages of using Python is that a program can be written using fewer lines of code compared to other programming languages, such as C++ or Java. In addition, Python is freely available, is open-source software, can be used on different operating systems, is object-oriented, and has automatic memory management.

Overview of Data Structures

Every program has values that need to be stored while it executes. Data structures can be defined as containers that store these values and provide consistent methods for manipulating the stored data.  Some of the reasons to use data structures are:

  • Efficient problem-solving
  • The ability to focus on the main problem without getting into low-level detail
  • A consistent way of handling data
  • Secure storage of information
  • Easy sorting of data

Types of Data Structures

Different types of available Python Data Structures are:

  • Strings: The most commonly used data structure, written using either single or double quotes.  Many operations can be performed easily using the built-in string methods. You can also index strings and extract slices using subscript notation.
  • Lists: The most general-purpose, systematized data structure. Using a list, different types of Python objects, such as numbers, functions and strings, can be stored in an ordered sequence (see the combined example after this list). Some of the common list methods are:
    • list.append: Adds an item to the end of the list
    • list.insert: Inserts an item at a given position in the list
    • list.remove: Removes the first matching item from the list
    • list.extend: Extends the list by appending all the items from another list
    • list.sort: Sorts the items of the list in place
    • list.count: Returns the number of times a value appears in the list
  • Dictionaries: One of the most important data types used in Python. Values in a dictionary are entered as key-value pairs enclosed within {} braces. Dictionaries are used where there is a logical association between keys and values, where fast data lookup is needed, or where the data is constantly modified.
  • Sets: Data structures used for storing an unordered collection with no duplicate values. Sets are primarily used for membership testing and eliminating duplicate values.
  • Tuples: Data structures that behave like immutable lists, in which data values are separated by commas. Immutable means that you cannot add, change or delete values once the tuple is created.
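The short example below runs through the structures described above; all names and values are illustrative.

```python
# List: ordered, mutable sequence of mixed objects.
languages = ["Python", "Java", "C++"]
languages.append("Go")        # add to the end
languages.insert(1, "Rust")   # insert at position 1
languages.remove("C++")       # remove the first matching item
languages.sort()              # sort in place
print(languages, languages.count("Python"))

# Dictionary: key-value pairs with fast lookup by key.
ports = {"http": 80, "https": 443}
ports["ssh"] = 22             # add or update an entry
print(ports["https"])

# Set: unordered, no duplicates -- handy for membership testing.
seen = {"alice", "bob", "alice"}
print("bob" in seen, len(seen))   # True 2

# Tuple: immutable sequence, values separated by commas.
point = (3, 4)
# point[0] = 5  # would raise TypeError: tuples cannot be modified
print(point[0])
```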

Getting Acquainted with a Java Thread

Monday, 8 February, 2016


Multithreaded programs can be developed using the Java programming language.  A multithreaded program is one that comprises two or more threads of control, each performing a different task at the same time. Multithreading can be viewed as a form of multitasking in which each thread corresponds to a task. The benefit of a multithreaded program is that multiple activities run in parallel within the same program.

What is a Thread?

A thread in a Java program, in its simplest definition, is a sequential flow of statements that defines an execution path through the program.  Every Java program contains at least one thread, the main thread, which is started by the main() method.

Ways to Create a Java Thread

The Thread class, written as java.lang.Thread, is one of the main classes in Java. This class, along with the Runnable interface, is used to create threads in Java. A thread in Java can be created in either of the following two ways (a minimal sketch follows the list):

  • Extending the java.lang.Thread class, or
  • Implementing the java.lang.Runnable interface
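Here is a minimal sketch showing both approaches side by side. The class and variable names (Worker, ThreadDemo, task) are illustrative, not part of the Java API.

```java
// One way: extend java.lang.Thread and override run().
class Worker extends Thread {
    @Override
    public void run() {
        System.out.println("Worker thread: " + Thread.currentThread().getName());
    }
}

public class ThreadDemo {
    public static void main(String[] args) {
        // 1. A Thread subclass.
        new Worker().start();

        // 2. A java.lang.Runnable (written here as a lambda) handed to a Thread.
        Runnable task = () ->
            System.out.println("Runnable thread: " + Thread.currentThread().getName());
        new Thread(task).start();
    }
}
```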

Life Cycle of a Java Thread

A thread in a Java program goes through different states during its life cycle.

A brief description of these states is as follows:

  • New:  Refers to the state after an instance of the Thread class has been created but before the invocation of the start() method. This state marks the beginning of the life cycle of a thread.
  • Runnable: Refers to the state where a thread is ready to execute the task for which it was created. The thread enters this state after the start() method is invoked.
  • Running: Refers to the state where the thread has been selected by the scheduler and is currently executing.
  • Not-Runnable (Blocked): Refers to the state where the thread is alive but is not currently eligible to run.
  • Terminated: Refers to the state where the thread has terminated and is in a dead state.

Familiarizing with Thread Priorities

A Java program can contain multiple threads, which raises the question of how the program knows which thread to run first, or in what order all the threads execute. The answer lies in scheduling thread execution. Each thread in a Java program has a priority. By default, a thread inherits the priority of the thread that created it. Priorities range from 1 to 10, where 1 denotes the minimum priority level and 10 denotes the highest priority level. A thread’s priority can also be changed as required using the java.lang.Thread.setPriority() method, as sketched below.
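A small sketch of setPriority() in use; the thread names are illustrative, and note that priority is only a hint to the scheduler, not a guarantee of execution order.

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Runnable task = () ->
            System.out.println(Thread.currentThread().getName()
                    + " runs at priority " + Thread.currentThread().getPriority());

        Thread low = new Thread(task, "low-priority-thread");
        Thread high = new Thread(task, "high-priority-thread");

        low.setPriority(Thread.MIN_PRIORITY);   // 1
        high.setPriority(Thread.MAX_PRIORITY);  // 10

        low.start();
        high.start();
    }
}
```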

Learning Linux V7 Essentials

Thursday, 4 February, 2016


Introduction

Linux is an operating system that enables applications and helps the user access the devices on the computer system to perform various operations. In fact, it manages the communication between the computer’s hardware and software.

Features of Linux V7

Linux has now come up with additional features through its V7 version to enable scalability, reliability and security in its applications. This version has a very user-friendly environment with improved operational efficiency. Features include:

  • Linux containers with enhanced application development, delivery, portability and isolation.
  • XFS as the default file system, scaling up to 500 TB.
  • Application runtimes and development and troubleshooting tools that are container-ready, more powerful and more secure.
  • For modernizing the management services and security, an innovative infrastructure component system is provided.
  • Linux V7 has optimized performance and easy scalability as it comes with built-in performance profiles, tuning and instrumentation.
  • In order to ensure a streamlined administration and system configuration, it is equipped with Unified management tooling and an industry-standard management framework with OpenLMI.
  • Enhanced application isolation and security to counter unintentional interference and malicious attacks.
  • In order to enable secure access for Microsoft Active Directory users, it is equipped with cross-realm trust.

Advanced Features

  • Web Control – This is a new, simpler front-end graphical interface. It configures and monitors one or more FAHClient slots through an easy-to-use web page.
  • FAHControl – This is an optional advanced front-end graphical interface which configures and monitors one or more FAHClient slots on one or more computers.
  • FAH Slot – The FAH slot facilitates interconnectivity between GPUs and CPUs. Each slot can download, process and upload results independently.
  • FAHViewer – This is a newly modeled work unit viewer which offers various display options like ball-and-stick, space-fill, zoom and rotation. It also has snapshot capture and a cycling feature.
  • FAHClient – This client software is a back-end component and runs behind the scenes. It manages the work assignments for each client slot.

Learning basic Linux commands

One essential step in mastering Linux V7 is to master the basic Linux commands. Copy commands, disk duplication commands and file commands are very much the same in all Linux versions.

Programming in Linux

To learn Linux, and especially Linux V7, the user must be familiar with general programming basics. Linux distributions support various programming languages such as Ada, C, C++, Go and Fortran, and several languages such as PHP, Perl, Ruby, Python, Java, Rust and Haskell support Linux. Linux also includes special-purpose programming languages targeted at scripting, text processing and system configuration, such as shell script, awk, sed and make.