Effective and Open Source Test Automation Tool: Selenium

Selenium is a portable, open source test automation framework for web applications. It was built at ThoughtWorks as an open source, browser-based test framework. Selenium strongly supports Firefox but is also compatible with IE 6/7, Opera and Safari 2.0+.

Selenium's core is developed in JavaScript and HTML. Tests can be written as HTML tables or coded in a number of popular programming languages such as Java, Ruby and Python. Selenium is available on Windows, Linux and Macintosh.

Selenium was created by Jason R. Huggins and his team, and was originally known as the JavaScript Functional Tester (JSFT).

Some features of Selenium include:

  1. Auto-complete for all common Selenium commands
  2. Easy record and playback
  3. Walk-through of tests
  4. Debugging and setting of breakpoints
  5. Saving tests as HTML, Ruby scripts or other formats
  6. Support for the Selenium user-extensions.js file

Components of Selenium:

There are three major components of Selenium; each one has a specific role in aiding the development of web application test automation. The components are:

  1. Selenium IDE (Integrated development Environment)
  2. Selenium RC (Remote Control)
  3. Selenium Grid


Selenium IDE is used for building Selenium test cases. It operates as a Firefox add-on and provides an easy-to-use interface for developing and running individual test cases or entire test suites.


Selenium RC supports writing tests in programming languages such as Java, C#, Perl, PHP, Python and Ruby, in addition to the HTML table format. This gives testers maximum flexibility and extensibility in developing test logic.


Selenium Grid extends Selenium RC to run in multiple environments: test suites can be run on multiple operating systems and browsers simultaneously.

Posted in Quality Assurance & Testing

How to install Xen 4.1 on CentOS 5 default kernel?

What is Xen ?

Xen is a bare-metal hypervisor that can run multiple different operating systems on a single physical machine. The concept of running one operating system on top of another is known as virtualization.

Xen is an enterprise-grade, production-ready virtualization hypervisor that can run operating systems such as Windows, RHEL, CentOS, BSD and openSUSE.

Xen is called a hypervisor because Xen itself is a special kernel that starts the base OS. In Xen virtualization, the base OS is usually called Dom-0, and the virtual machines that run on the Xen hypervisor are known as Dom-U.

Running Mac OS on Xen ?

Running Mac OS on Xen is not possible because Xen does not emulate hardware. Unlike Xen, VMware can emulate hardware and is therefore able to run Mac OS.

Steps to install Xen 4.1 on CentOS

You will need a CentOS installation with a GUI (KDE / GNOME):

# yum groupinstall "X Window System" "KDE (K Desktop Environment)"
# wget -O /etc/yum.repos.d/GitCO.repo "http://www.gitco.de/repo/GITCO-XEN4.1.0_x86_64.repo"
# yum install xen.x86_64

Use pci=nomsi iommu=off as kernel and module parameters. This step is crucial, as the reboot into Xen 4.1 might otherwise hang your system later.

# yum install virt-manager

You now have Xen 4.1 on CentOS, and with the help of virt-manager you can create new VMs.

Posted in Xen

System variables used in Perl

Perl is a powerful, adaptable and dynamic programming language that is compiled each time before running. It was first developed by Larry Wall in the late 1980s. Today we will briefly discuss the system variables used in Perl.

The following are system variables used in Perl:

1. $$
This variable returns the process ID of the process that is running the current Perl script. It is very handy when operations are to be performed using the script's process ID.

2. $<
This variable returns the real user ID of the current process.

3. $>
This variable returns the effective user ID of the current process.

Note: The real user ID is the ID of the user who started the process, while the effective user ID is the ID whose privileges the process is currently running with. The two differ, for example, while a setuid program is running.

4. $(
This variable holds the real group ID of the current process.

5. $)
This variable holds the effective group ID of the current process.

6. $0 (zero)
This variable holds the name of the file the current Perl process is executing.

7. $^X
This variable stores the path to the Perl interpreter binary that is executing the script.

8.  $]
This variable gives the version number of the Perl interpreter, which helps trace version compatibility issues that occur with various modules.

9.  $^O
The name of the operating system the Perl binary was built for. This is similar to $Config{'osname'}.

10.  $^T
This variable returns the time at which the script started running, in seconds since the beginning of 1970 (the epoch).

11.  $^W
Returns the value of the warning switch, either TRUE or FALSE. The warnings pragma controls additional Perl warnings; the syntax to enable them is "use warnings" and to disable them is "no warnings".

12.  %ENV
This special hash variable maintains all the environment variables. Environment variables store information that is used to run the system. One can change the values of these variables, but the changes are visible only within the program; once the program exits, they lose their scope. Every Perl process is provided with its own copy of the environment variables.
One can print out all these environment variables using:

foreach (keys(%ENV)) {
    print "Key: $_ and its value: $ENV{$_}\n";
}
13.  %SIG
This is also a special hash variable that can be used to set signal handlers of various signals. This makes signal handling quite easy as one can set a signal handler to a certain value and again reset its value to default.

Similar to %ENV, %SIG can also be printed using:

foreach (keys(%SIG)) {
    print "$_\n";
}
Running this prints the list of signal names supported on your system.

14.  @INC

This special array is similar to the shell's PATH variable. While PATH contains a list of directories to be searched for executables, @INC contains the list of directories from which Perl modules and libraries can be loaded.

When we write use(), require() or do(), Perl searches the directories listed in @INC for the requested file. If the file is not present in any of these directories, its location has to be added to @INC separately.

15.  %INC
This is a special hash that stores the names of the files and modules that were successfully loaded and compiled for a given Perl script. Before loading and compiling any file or module, Perl first checks for it in this special hash, and loads and compiles it only if it is not already there.

Posted in Perl

How to Hide/Encrypt HTML Source Code

Developers are increasingly concerned about the security of their websites, and hence many of them want to hide their HTML code.

Though it is impossible to truly hide HTML source code, you can make it unreadable by using encryption and decryption techniques, or discourage casual copying by disabling the mouse's right click using JavaScript.

Let's see the right-click disable method with the following sample code:

<SCRIPT TYPE="text/javascript">
var msg = "You cannot use right click on this page.";
function clickIE() { if (document.all) { alert(msg); return false; } }
function clickNS(e) {
    if (document.layers || (document.getElementById && !document.all)) {
        if (e.which == 2 || e.which == 3) { alert(msg); return false; }
    }
}
if (document.layers) { document.captureEvents(Event.MOUSEDOWN); document.onmousedown = clickNS; }
else { document.onmouseup = clickNS; document.oncontextmenu = clickIE; }
document.oncontextmenu = new Function("return false");
</SCRIPT>

Though this code disables the mouse's right click on your page, if JavaScript is disabled in the client's browser the right click works as usual. Hence we need to take additional security measures.

In the next method, we will discuss encryption and decryption techniques in JavaScript with keys.

Following is the example with encryption and decryption of HTML source code in ASP:

<%@ Language=VBScript %>
<SCRIPT language="javascript" runat="server">
function encodestr(s, k) {
    var result = "";
    var kl = k.length;
    for (var i = 0; i < s.length; i++) {
        // XOR with the key, then emit the two nibbles of the
        // result as lowercase letters, low nibble first.
        var encodedChar = s.charCodeAt(i) ^ k.charCodeAt(i % kl);
        result += String.fromCharCode((encodedChar & 0x0F) + 97)
                + String.fromCharCode((encodedChar >> 4) + 97);
    }
    return result;
}
</SCRIPT>
The JavaScript above runs at the server side. The function "encodestr" takes two parameters: "s", the string to be encoded, and "k", the key for encoding.

set objHttp = Server.CreateObject("Msxml2.ServerXMLHTTP")
objHttp.open "GET", "http://www.google.com", false
objHttp.setRequestHeader "Content-type", "application/x-www-form-urlencoded"
objHttp.send
resp = objHttp.responseText

The ASP code above fetches the HTML of "http://www.google.com" and stores it in the "resp" variable as a string, which is then encoded using "encodestr" and stored back in the same variable.

<SCRIPT type="text/javascript">
function decodestr(s, k) {
    var result = "";
    var kl = k.length;
    for (var i = 0, j = 0; i < s.length; i += 2, j++) {
        // Rebuild each character from its two nibble letters,
        // then XOR with the key again.
        result += String.fromCharCode(
            ((s.charCodeAt(i) - 97) + ((s.charCodeAt(i + 1) - 97) << 4)) ^ k.charCodeAt(j % kl));
    }
    return result;
}
</SCRIPT>

The JavaScript above runs at the client side. The function "decodestr" also takes two parameters: "s", the string to be decoded, and "k", the key for decoding, which must be the same key used for encoding.

<SCRIPT type="text/javascript">
document.write(<%= "decodestr(""" & resp & """, ""key"")" %>);
</SCRIPT>

The script above passes the encoded string to "decodestr", and the returned string is written out by document.write.
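The encoder/decoder pair above is easy to sanity-check outside the browser. Below is a Python sketch of the same transformation (the function names simply mirror the article's encodestr/decodestr and are not part of any library), round-tripping a sample string:

```python
def encodestr(s, k):
    # XOR each character with the key, then emit the two nibbles
    # of the result as lowercase letters, low nibble first.
    out = []
    for i, ch in enumerate(s):
        c = ord(ch) ^ ord(k[i % len(k)])
        out.append(chr((c & 0x0F) + 97))
        out.append(chr((c >> 4) + 97))
    return "".join(out)

def decodestr(s, k):
    # Rebuild each byte from its two nibble letters and XOR again.
    out = []
    for j, i in enumerate(range(0, len(s), 2)):
        c = (ord(s[i]) - 97) + ((ord(s[i + 1]) - 97) << 4)
        out.append(chr(c ^ ord(k[j % len(k)])))
    return "".join(out)

html = "<html><body>hello</body></html>"
enc = encodestr(html, "key")
print(enc != html and decodestr(enc, "key") == html)  # True
```

Because the output alphabet is only the letters 'a' through 'p', the encoded page can safely be embedded inside a quoted JavaScript string, which is what the ASP snippet relies on.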

With this method the HTML source is delivered in an encrypted format that is not easily readable. You can combine both of the above techniques for better security.

You can use your own technique for encryption and decryption, but make sure the encryption code is not visible in the client's browser. For this you can use the runat="server" attribute in your JavaScript function.


Posted in Security

Delete/Truncate data from all Tables in SQL Server

We usually use a dummy database for testing, and it gets filled with a lot of unwanted temporary data. At deployment time we replicate this database as the live database, and while doing so we need to delete all the data from each of its tables. This is easily done with DELETE or TRUNCATE when a table has no foreign key relationships; otherwise we first have to disable all CHECK constraints and triggers on the table and only then run DELETE or TRUNCATE. As this is a tedious and hectic task, MS SQL Server provides the undocumented stored procedure sp_MSForEachTable, which is popularly used for performing an action on all the tables within a database.

Let's see how to use this procedure to empty your database. A commonly used sequence (the "?" placeholder is replaced by each table name in turn) is:

EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
EXEC sp_MSForEachTable 'ALTER TABLE ? DISABLE TRIGGER ALL'
EXEC sp_MSForEachTable 'DELETE FROM ?'
EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
EXEC sp_MSForEachTable 'ALTER TABLE ? ENABLE TRIGGER ALL'

On executing the above statements, the data from all tables within the database gets deleted. For truncating data from all tables, we can instead use a procedure like the one below:

Create Procedure dbo.sp_TruncateAllTables (@ResetFlag Bit) As
Declare @SQLQUERY VarChar(500), @TableName VarChar(255), @ConstraintName VarChar(500)
Declare AllForeignKeys Scroll Cursor For
  Select Table_Name, Constraint_Name From Information_Schema.Table_Constraints Where Constraint_Type = 'FOREIGN KEY'
Open AllForeignKeys
Fetch Next From AllForeignKeys Into @TableName, @ConstraintName
While @@FETCH_STATUS = 0 Begin
  Set @SQLQUERY = 'ALTER TABLE ' + @TableName + ' NOCHECK CONSTRAINT ' + @ConstraintName
  Execute(@SQLQUERY)
  Fetch Next From AllForeignKeys Into @TableName, @ConstraintName
End
Declare curAllTables Cursor For
  Select Table_Name From Information_Schema.Tables Where Table_Type = 'BASE TABLE'
Open curAllTables
Fetch Next From curAllTables Into @TableName
While @@FETCH_STATUS = 0 Begin
  Set @SQLQUERY = 'DELETE FROM ' + @TableName
  If @ResetFlag = 1 And ObjectProperty(Object_Id(@TableName), 'TableHasIdentity') = 1
    Set @SQLQUERY = @SQLQUERY + '; DBCC CHECKIDENT(''' + @TableName + ''', RESEED, 0)'
  Execute(@SQLQUERY)
  Fetch Next From curAllTables Into @TableName
End
Fetch First From AllForeignKeys Into @TableName, @ConstraintName
While @@FETCH_STATUS = 0 Begin
  Set @SQLQUERY = 'ALTER TABLE ' + @TableName + ' CHECK CONSTRAINT ' + @ConstraintName
  Execute(@SQLQUERY)
  Fetch Next From AllForeignKeys Into @TableName, @ConstraintName
End
Close curAllTables
Deallocate curAllTables
Close AllForeignKeys
Deallocate AllForeignKeys

The procedure takes a bit parameter (0 or 1) indicating whether it is being called for DELETE-style (0) or TRUNCATE-style (1) behavior. It collects the constraint information for each table from Information_Schema.Table_Constraints and disables the foreign keys before deleting. With the flag set to 1, the procedure also resets the seed value of identity columns: if ObjectProperty reports 'TableHasIdentity' for a table, it runs the extra statement DBCC CHECKIDENT('<table>', RESEED, 0) after the delete; otherwise it simply deletes the data from the table.
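The same pattern (disable constraint enforcement, delete from every table, re-enable it) can be sketched with Python's built-in sqlite3 module. This is an analogy rather than a translation: SQLite has a single connection-wide foreign-key switch (PRAGMA foreign_keys) instead of SQL Server's per-table NOCHECK CONSTRAINT, and the table names here are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    PRAGMA foreign_keys = ON;
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id));
    INSERT INTO parent VALUES (1);
    INSERT INTO child  VALUES (1, 1);
""")

# Turn off FK enforcement so the delete order does not matter,
# delete from every user table, then turn enforcement back on.
conn.execute("PRAGMA foreign_keys = OFF")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for t in tables:
    conn.execute(f"DELETE FROM {t}")
conn.commit()
conn.execute("PRAGMA foreign_keys = ON")

counts = {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
          for t in tables}
print(counts)  # both tables are now empty
```

As in the T-SQL version, deleting rather than dropping and recreating tables preserves the schema while emptying every table.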

Thus, knowing the advantages of DELETE over TRUNCATE, most developers prefer to use DELETE.

Posted in Database, MySQL

A brief about MongoDB

The success of any application depends on its database schema; a well-normalized database generally leads to better application performance. But designing a normalized database schema is not a piece of cake, and programmers sometimes need to change code as requirements change, which in turn changes the database schema. Hence one way to map code and documents directly to the database is a document-oriented database.

A document-oriented database is a schema-less database. One of the most popular document-oriented databases is the Mongo database (MongoDB). A Mongo database is similar to a relational database except that it stores data in files as documents. Documents contain fields (key-value pairs), and a value can be a basic type such as an integer or a string, an array, or an embedded document.

Why MongoDB?


  • Documents (objects) map nicely to programming language data types.
  • Embedded documents and arrays reduce need for joins.
  • Dynamically typed (schema-less) for easy schema evolution.
  • No joins and no multi-document transactions for high performance and easy scalability.

High performance

  • No joins, and embedding makes reads and writes fast.
  • Indexes, including indexing of keys from embedded documents and arrays.
  • Optional asynchronous writes.

High availability

  • Replicated servers with automatic master failover.

Easy scalability

  • Automatic sharding (auto-partitioning of data across servers).
  • Reads and writes are distributed over shards.
  • No joins or multi-document transactions make distributed queries easy and fast.
  • Eventually consistent reads can be distributed over replicated servers.
  • Rich query language.

MongoDB thus offers a lot of useful features: embedded documents for speed and manageability, agile development with schema-less databases, and easier horizontal scalability because joins aren't as important.

Large MongoDB deployment:

1. One or more shards, each shard holds a portion of the total data (managed automatically). Reads and writes are automatically routed to the appropriate shards. Each shard is backed by a replica set which just holds the data for that shard.

A replica set is one or more servers, each holding copies of the same data. At any given time one is primary and the rest are secondaries. If the primary goes down, one of the secondaries takes over automatically as primary. All writes and consistent reads go to the primary, and all eventually consistent reads are distributed amongst the secondaries.

2. Multiple config servers, each one holding a copy of the metadata indicating which data lives on which shard.

3. Each router may act as a server for one or more clients. Clients issue queries/updates to a router and the router routes them to the appropriate shard with the help of config servers.

4. Each client is a part of the user’s application and issues commands to a router via the mongo client library (or driver) for its language.

mongod is the server program (for both data and config servers); mongos is the router program.

Mongo data model consists of:

  • A Mongo system (see deployment above) holds a set of databases.
  • A database holds a set of collections.
  • A collection holds a set of documents.
  • A field is a key-value pair.
  • A key is a name (string).
  • A value is a basic type like string, integer, float, timestamp, binary, a document, or an array of values.

Mongo query language:

To retrieve certain documents from a collection, you supply a query document containing the fields the desired documents should match. For example, {name: {first: 'John', last: 'Doe'}} will match all documents in the collection whose name is exactly John Doe. Likewise, {'name.last': 'Doe'} will match all documents with a last name of Doe, and {'name.last': /^D/} will match all documents with a last name starting with 'D' (a regular-expression match).

Queries also match inside embedded arrays. For example, {keywords: 'storage'} will match all documents with 'storage' in their keywords array. Likewise, {keywords: {$in: ['storage', 'DBMS']}} will match all documents with 'storage' or 'DBMS' in their keywords array.
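To make these matching rules concrete without a running MongoDB server, here is a small Python sketch that mimics them over plain dictionaries. This illustrates the semantics only; it is not the pymongo driver API, and the helper name matches() is made up:

```python
def matches(doc, query):
    """Return True if doc satisfies the query document.
    Supports dotted paths into embedded documents and
    MongoDB-style array membership: a scalar query value
    matches if the stored field equals it OR is an array
    containing it."""
    for path, expected in query.items():
        value = doc
        for part in path.split("."):
            if not isinstance(value, dict) or part not in value:
                return False
            value = value[part]
        if isinstance(value, list):
            if expected not in value and expected != value:
                return False
        elif value != expected:
            return False
    return True

docs = [
    {"name": {"first": "John", "last": "Doe"}, "keywords": ["storage", "DBMS"]},
    {"name": {"first": "Jane", "last": "Roe"}, "keywords": ["web"]},
]
print([matches(d, {"name.last": "Doe"}) for d in docs])    # [True, False]
print([matches(d, {"keywords": "storage"}) for d in docs])  # [True, False]
```

Real MongoDB applies the same idea server-side, with many more operators ($in, regular expressions, ranges) layered on top.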

If you have lots of documents in a collection and you want a query to be fast, build an index for that query, for example ensureIndex({'name.last': 1}) or ensureIndex({keywords: 1}). Note that indexes occupy space and slow down updates a bit, so use them only when the tradeoff is worth it.


Posted in Database

Build Your Private Chromium OS Chromebook

  • Introduction To Google Chromium OS

Google Chromium OS is a Linux-based open source operating system developed by Google, specifically designed to run web applications. The project was launched on July 7, 2009. The downloaded source code can only be compiled and run on hardware supported and approved by Google and its partners.

Google provides an official site for running a compatibility check on the hardware you intend to install Chromium OS on. The only application that runs on your desktop is the browser (with a media player); with this in mind, Chromium OS was developed to target users who spend most of their time on the internet.

Moreover, Chromium OS is built using Portage from Gentoo with a specific overlay called the Chromium OS portage overlay. It is programmed in C and C++ and supports the x86 and ARM platform families. Updates are installed automatically on the client machine.

  • OS Architecture

The Chromium OS open source project is based on a three-tier architecture: firmware; the browser and window manager; and system-level software and other user-related software and services. The firmware speeds up boot time by not querying hardware, such as floppy disk drives, that is no longer used. The Linux kernel is included in the system-level software.

  • Security

Google has taken utmost care to keep the OS secure, hardening it and providing automatic updates that regularly check the operating system's status. Moreover, since the OS is open source, flaws can be surfaced through user feedback and then rectified. Boot code stored in ROM (Read-Only Memory) checks the system for tampering, which, according to Google sources, makes it one of the most secure operating systems.

  • Linux shell access

Google has developed a Chromium shell known as "crosh", from which users can ping and ssh. Bash commands are not provided directly but can be accessed via the crosh shell.

  • Remote access

The remote-access technology, known as "Remoting", was developed by Gary Kacmarcik and is similar to Microsoft's RDP, but it is yet to be commissioned.

  • Media

A media player is also integrated to address the client’s music and other media needs. This will allow users to play MP3 files, view JPEG images and maintain other multimedia files.

  • Hardware support

Google built this OS mainly for secondary devices such as netbooks, as it is not very effective for primary PCs. Google recommends SSDs (Solid State Drives) for faster I/O and thus increased performance, but HDDs (Hard Disk Drives) are also supported. Using an SSD boosts performance and also helps reduce space requirements to a large extent.

Now that we have a basic idea about Chromium OS let us understand the steps to configure our own ChromeBook.

  • A. Getting the correct NetBook

Chromium OS runs smoothly on a netbook that has an x86 or Atom processor, though getting add-on devices such as Bluetooth and WiFi interfaced and running on the OS may require some troubleshooting on your end.

  • B. Download and Install

A 2 GB pen drive is sufficient for the downloaded Chromium OS image. You can download a live CD or a USB disk image from the official Chromium OS site. If you are familiar with Linux, you can build and install it entirely from the source code; compiling takes only about an hour, though downloading a prebuilt image is the safest and easiest option. The root user and password are also provided on the same site.

  • C. Boot UP

Once you have downloaded the image, you can burn it to a USB drive or a CD. After that, all you have to do is boot your machine from the respective device. After a few seconds you will be asked to select a network. Once a proper ethernet or WiFi network is detected, you will be greeted with a welcome message, as in any other OS, and asked to log in with your details. If Chromium OS detects a webcam on your system, it will prompt you to take a photo to build your avatar; you can skip this step and upload an avatar later. Once you are through, you will see a plain Google Chromium browser with a clean GUI, and you are ready to go online.

  • D. Installing on a hard drive

Booting from a USB or CD drive is always inconvenient, as these media are prone to damage or corruption. So once you are satisfied with the performance and the user interface, you can permanently install the OS on your hard disk. Installing it on your HDD is quite easy, with the warning that all data on the target disk will be completely wiped.

All you have to do is hit "Ctrl+Alt+T", type "/usr/sbin/chromeos-install" and hit Enter (for other builds the command is "install"). Enter the root password, i.e. "facepunch" (for a Dell build use "dell1234"), and follow some simple prompts. Once the installation is complete, remove the pen drive or CD and reboot your system. Congratulations, you now own a brand-new Chromium OS.

  • E: Conclusion

Note that you have installed Chromium OS, not Chrome OS. The major difference is that Google Chrome OS is the official build, released and supported by Google developers, which makes it more stable. Chromium OS, on the other hand, lacks a few features such as a PDF plug-in, MP3 support and print preview, but you still have a working OS with cloud functionality.
Now you can go to the Chrome Web Store to check for new apps, set up Google Cloud Print, and connect to Google Music. Check the Chromium OS developer resources for more troubleshooting tips.


Posted in Google Chromium, Open Source

How to Improve your Coding Skills

Today, I would like to share with you a few useful tips on how to improve your coding skills:

1: Be clear about system needs:

Requirement gathering is the most important step of the SDLC (Software Development Life Cycle). It is the first and foremost step, and utmost importance should be given to it in order to achieve the required results. Improper requirement gathering leads to improper coding, which in turn leads to an end product that does not meet the requirements. Taking time to understand the system's needs will pay off later.

2: Plan accordingly:

It is necessary to chalk out a proper plan before you start coding. It is good practice to make a basic flowchart or an equation.
A big problem can be solved easily by breaking it into parts, so try to adopt a modular approach: modules are easier to understand and their quality can be controlled more easily.

It is common practice to start coding without a proper plan in place, which may lead to an increase in debugging time. Planning at the preliminary stages saves time and effort.

3: Use standards in variable declaration:

Always try to follow standard conventions when defining variables. It helps in keeping track of a variable's type and purpose, makes the code more readable, and makes debugging and maintenance of the system simpler.

Use conventions for variables, such as adding the data type as a prefix. Example: for integer variables you can use intUserId, and for strings strUserName. You can choose your own conventions, but make sure they are the same across the entire project.

4: Organize your Code:

Always try to use proper indentation for blocks of code, especially for conditional blocks such as if/else and loops.
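For instance, a minimal Python snippet showing a consistently indented conditional block, where each nesting level makes the structure visible at a glance:

```python
def classify(n):
    # One indentation step per nesting level keeps the
    # branches of the conditional easy to scan.
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

print(classify(-5), classify(0), classify(7))  # negative zero positive
```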


It is better to use 'tab' instead of 'spacebar' for aligning lines of code.
Try to put spaces between a variable name and an operator such as addition, subtraction, multiplication or division. For example: variable = a + b;

This makes the code more readable and easily understandable, and makes debugging easier.

5: Comment on your code:

Try to comment your code as much as possible. This is very good practice, as it makes your code understandable. Comments will help you understand the logic later, or help someone else understand the intention of your code. They are very helpful for a new developer who needs to understand the current logic and edit it as per requirements, and hence they save time.


Posted in Programming

Data storage challenges and introduction to Unified Data Storage

Data Storage Challenges:

Meeting storage challenges requires an in-depth understanding of the need for more storage, identifying application performance issues, and keeping a check on cost and technology. Managing traditional storage is thus a difficult task, as it suffers from performance and scalability issues, higher operational costs and complex management. Some of these challenges are:

A. Massive Data Demand:

An industry survey estimates that the digital universe will grow to 35 zettabytes by 2020. For scale: one terabyte is 1024 gigabytes, one petabyte is 1024 terabytes, one exabyte is 1024 petabytes, and one zettabyte is 1024 exabytes (in shorter terms, about 1 trillion gigabytes).
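The unit chain above can be verified with a few lines of arithmetic (1024 per step):

```python
gb_per_tb = 1024                  # 1 terabyte  = 1024 gigabytes
gb_per_pb = 1024 * gb_per_tb      # 1 petabyte  = 1024 terabytes
gb_per_eb = 1024 * gb_per_pb      # 1 exabyte   = 1024 petabytes
gb_per_zb = 1024 * gb_per_eb      # 1 zettabyte = 1024 exabytes

print(gb_per_zb)       # 1099511627776 gigabytes, i.e. about 1 trillion
print(35 * gb_per_zb)  # the predicted 35 ZB, expressed in gigabytes
```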

B. Performance Barrier:

This rapid growth has caused a parallel increase in the size of databases, whose queries require faster response times from the underlying conventional storage. Whether it is a social networking site, an enterprise database, a modern web application or a portal, all require faster disk access to read and write data. These applications' performance depends not only on faster CPU cycles but also on I/O latency and throughput.

Intel co-founder Gordon Moore's 1965 prediction that processing power would double every two years is practically happening, so there is an immediate need for similar predictions for storage and, more importantly, for their practical implementation. With this not happening, storage devices are divided into smaller units to cope with the response times required by heavy applications.

C. Exorbitant costs and power consumption:

With the daily increase in storage demands, IT organizations and data centers are looking for larger storage at affordable cost. Unfortunately, high-performance storage comes with a huge price tag, maintenance costs and other licensing overheads, and other factors add to the cost as well: cooling equipment, space requirements and manpower. According to one report, storage consumes approximately 40 percent of a typical data center's power.

What Is Unified Storage?

As companies look for innovative and efficient storage solutions, one such solution is "Unified Storage". Unified storage is basically a combination of Network Attached Storage (NAS) and a Storage Area Network (SAN), and is also known as network unified storage (NUS). This type of storage system makes it possible to serve both files and applications from a single device. Its primary benefit is reduced hardware cost, because it combines NAS and an array of RAID disks in a single device, and it supports both Fibre Channel SAN and iSCSI.

Moreover, management is quite simple, as there is a single point of control instead of distributed or multiple points. This makes deploying critical applications on unified storage the most convenient option for almost all organizations. To the relief of organizations, the initial cost and maintenance of unified storage are about the same as those of conventional storage, and the stability provided by this storage is better than others. However, for consistently high performance and control, dedicated servers are still better.

In the next post we will discuss Oracle Unified Storage and its benefits.

Posted in Storage

A brief introduction to Mobile Application Development

In the past couple of years we have witnessed tremendous growth in mobile users all over the world, as the entry of smartphones into the market at affordable prices has triggered their usage. We have experienced a major shift in the way we access the internet today, with mobiles becoming the primary access point for internet usage. In today's fast-paced world, phones are not just used for calling or playing games: with smartphones we can schedule our complete day, check emails, make conference calls, connect on social networks and perform a host of other activities.

The growth of the mobile phone market has generated huge demand for mobile applications. Numerous applications are available that simplify various tasks for users, which is why we have seen accelerated growth in software development for mobile devices. Mobile application development is the course of action by which application software is designed and developed for hand-held devices like mobile phones, personal digital assistants, etc.

Earlier, mobile developers faced many difficulties while writing applications, as they had to build better, unique, competitive and hybrid applications which would incorporate common tasks like messaging, contact lists and calling in a user-friendly manner. The launch of Android smartphones brought a revolution in mobile application development: if you have some basic knowledge of scripting and coding, you can start building your own applications. Mobile application development was never as easy as it is now.

Every mobile phone vendor/proprietary platform (Android, Apple iPhone OS, RIM BlackBerry OS, Bada OS, Symbian OS) provides its own SDK (Software Development Kit), with which developers can create applications. Developers are also provided a marketplace where they can publish their creations to the world.

Considering current market trends, online businesses like web hosting, shopping and job portals have developed mobile applications for their clients to provide better services. As per market research experts, there were 8.2 billion mobile app downloads globally in 2010, and downloads are expected to reach 76.9 billion in 2014, worth US $35 billion.

Posted in Mobile Application Development