Friday, February 07, 2014

Difference between View Page Source and Inspect Element in Chrome

Introduction

Right-clicking on a page in Google Chrome brings up a context menu. Two of its options are -

  1. View Page Source
  2. Inspect Element

The Difference between the two

I had always assumed that these show the same html "code", but they do not. View Page Source gives you the raw page source as it came from the server. Inspect Element gives you the rendered DOM. That's the key difference!
Consider the case of a JS script that updates the DOM dynamically. Looking at the page source, we only see the script code or the link to it. Inspect Element, however, shows the updated DOM! So it's an easy and simple way to look at your updated DOM.
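
A quick way to convince yourself of the difference: the raw source is essentially what the server sent over the wire, so a plain HTTP fetch will never contain anything a script added afterwards. The URL and element id below are made up purely for illustration.
  $ curl -s http://example.com/dynamic-page | grep 'added-by-script'
  # no output: that element exists only in the live DOM, which is what Inspect Element shows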

Sunday, January 26, 2014

Test Automation requires constant maintenance

Introduction

We spent the greater part of a decade working on a Test Automation system. That time was highly productive and really enjoyable. However, the question of what test automation should focus on keeps coming back. Should one write a system that incorporates test reporting, scheduling, a simple UI, etc., or should one write test cases that are effective? Obviously everyone wants effective test cases, but there is always pressure to make the reports look 'nice' and to make the system so simple that even the CEO's assistant can run it!

Test Automation in TCL/TK

The first system we wrote was in TCL/TK and used XML to configure the different test suites, tests and parameters. Using TCL/Expect, it automated interactions with an SoC on a board over a serial interface. It was designed to test Linux and RTOS network drivers. However, the interfaces kept changing and it really didn't do what we wanted it to. The only place it made sense was the performance and long-duration tests, where it was hooked up to an IXIA network simulator and controlled the network traffic.

Test Automation in Ruby

The second system was written simply because a PHB at the top of the corporate ladder decided that we could get better 'efficiency' if all the test automation teams worked together to write a comprehensive, one-shoe-fits-all system. Our counterparts had used Ruby and it seemed a pretty cool option to try. This system needed to work for testing RTOSes, Linux drivers, multimedia frameworks, compilers, SDKs and the kitchen sink. After all, the PHBs thought, testing is a single domain and everything else is incidental. The big problem was that we needed to talk to teams on the other side of the world every day to get anything useful done. The project management was a nightmare and we ended up writing a lite version of the software, which is still being used.

Costs

Looking back, I would say we spent the following:
  1. 4 resources worked full-time for 10 years on test automation with us. That's 4800 days spent on building, maintaining and enhancing an internal tool.
  2. The effort spent was 9600 person-days on average. There were times when we had more people working on the project, but that doesn't really change the picture.
  3. Assuming that each resource was billed at $4000 per month, that works out to about $20 per hour. A low figure I picked on purpose so that the cost estimate stays conservative.
  4. That works out to $192,000 at a minimum. If we are being realistic, then $100 per hour is probably the right ballpark number.
So, I would estimate that at least $960,000 was spent on test automation frameworks. This is a cost separate from actual test case implementation, test execution, project management, managing expectations, vendor management, outsourcing management, etc.

Summary

My advice - don't spend too much on test frameworks and test automation systems. Instead, focus on the actual tests. Focus on automating your tests with simple scripts and don't worry too much about tying them all together. Is it better to have a beautiful system with few tests, great reporting and a fantastic UI, or a large bank of tests with crummy reports and command-line runners? The latter gives you better bang for the buck, for sure. At the end of the day, XKCD sums it up neatly.
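
As a tiny example of what I mean by simple scripts and a command-line runner, something like the sketch below is often enough. The tests/ layout is just an assumption; each test is a script that exits non-zero on failure.
  #!/bin/sh
  # run-tests.sh - run every test script under tests/ and count results
  pass=0; fail=0
  for t in tests/*.sh; do
      [ -e "$t" ] || continue
      if sh "$t"; then
          pass=$((pass + 1))
      else
          fail=$((fail + 1))
          echo "FAIL: $t"
      fi
  done
  echo "passed: $pass  failed: $fail"
  [ "$fail" -eq 0 ]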

Sunday, January 05, 2014

Google Apps Marketplace

Introduction

Google Apps has a marketplace which hosts apps (not to be confused with Google App Engine). That means you can develop applications that sit on top of Google Apps like Gmail, Docs, etc. The real implication is that this turns Google Apps into a development platform, opening the door to interesting applications. Everything from business apps to simple to-do lists now runs in Chrome and can be tied to Google Drive.

Development

The starting point for development on Google Apps is https://developers.google.com/google-apps/. The only weird thing is the $5 upfront payment required before you can start developing. After that, it's similar to developing a Chrome extension or an app.

Wednesday, December 04, 2013

Ultrahaptics for hands-free feedback

Introduction

Ultrahaptics is the name a team at Bristol University has given their touchless feedback system. Haptics is the term used to describe tactile feedback, like a vibrating joystick or a force-feedback joystick. In the case of ultrahaptics, the feedback comes from focusing sound waves onto a point; from what I understood, they do this with a phased array of transducers.

This makes it really cool because now you get feedback without having to touch anything.

Applications of Ultrahaptics

I can think of several applications for this kind of technology, the first of which is definitely console games. Imagine hooking this up with Kinect! So, here's my list of places where I would like to see this.
  1. Mashup with Kinect.
  2. Digital signs and kiosks. Right now the only way to interact with them is to touch the screen.
  3. Phones and Tablets. I'm not fully sure of this one, but maybe.

Linux Device Drivers for Ultrahaptics

I don't know if there are any Linux drivers written for this. From the video, it looks like the ultrahaptics array is hooked up to a Mac. If this makes it into the mainstream, someone will definitely sit down and write drivers for it. There are several things they will need to take care of. There are USB joystick drivers today that provide both control and force feedback, and this would be something similar. However, userspace will also need to keep up: evdev would need to provide event notification for applications, and they would need some ioctl to control the 'feel' of the 'screen'. All in all, very interesting.
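
For comparison, this is roughly how you would poke at an existing force-feedback device today with standard evdev tooling. The device node below is made up, and the tools come from the usual distro packages (evtest, and fftest from linuxconsoletools); a future ultrahaptics driver exposing a similar interface could be exercised the same way.
  $ ls /dev/input/event*            # enumerate the input device nodes
  $ sudo evtest /dev/input/event5   # dump the events one device generates
  $ sudo fftest /dev/input/event5   # upload and play force-feedback effects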

Friday, November 29, 2013

How to build and set up the LLVM Scan Analyzer for the Linaro Toolchain

Downloading and Installing Clang-Analyzer

Download LLVM, Clang, Clang-tools-extra and Compiler-RT from - http://llvm.org/releases/download.html.

  1. Extract the LLVM tarball. I'm going to use the variable LLVM to point to this directory going forward.
  2. Go to $LLVM/tools. Extract the clang release tarball and rename the extracted directory to clang.
  3. Go to $LLVM/tools/clang/tools/. Extract the clang-tools-extra tarball and rename it to extra.
  4. Go to $LLVM/projects/. Extract the compiler-rt tarball and rename it to compiler-rt.
Note! - If you don't extract clang, clang-tools-extra and compiler-rt into these respective directories, they will not be built by default. The commands below sketch these steps.
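
Concretely, the layout looks something like this. The tarball names are illustrative (in some releases the clang source tarball is named cfe-<version>.src), so adjust them to whatever you actually downloaded:
  $ tar xf llvm-<version>.src.tar.gz
  $ export LLVM=$PWD/llvm-<version>.src
  $ cd $LLVM/tools
  $ tar xf /path/to/clang-<version>.src.tar.gz && mv clang-<version>.src clang
  $ cd $LLVM/tools/clang/tools
  $ tar xf /path/to/clang-tools-extra-<version>.src.tar.gz && mv clang-tools-extra-<version>.src extra
  $ cd $LLVM/projects
  $ tar xf /path/to/compiler-rt-<version>.src.tar.gz && mv compiler-rt-<version>.src compiler-rt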

For more information look at http://clang.llvm.org/get_started.html. The instructions are for svn but they apply to the release tarballs as well.

Building and Installing Clang

Go to the $LLVM directory; from there you can configure, build and install clang:
 $ ./configure
 $ make
 $ make install
If you want to install clang in a specific directory, pass --prefix= to configure. I usually use /proj/staging/ to stage my builds, so I configure with ./configure --prefix=/proj/staging/llvm

Note! make install will not install scan-build and scan-view. They remain in $LLVM/tools/clang/tools/scan-build and $LLVM/tools/clang/tools/scan-view, so you will need to add these directories to your PATH.
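
Something along these lines works (adjust to wherever your LLVM tree lives):
  $ export PATH=$LLVM/tools/clang/tools/scan-build:$LLVM/tools/clang/tools/scan-view:$PATH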

Downloading and Installing Linaro Toolchain

Download the Linaro toolchain from https://launchpad.net/linaro-toolchain-binaries/. Extract it to a directory of your choice; I usually place it in /proj/staging/linaro-gcc. Next, add the /proj/staging/linaro-gcc/bin directory to your PATH variable.
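
For example:
  $ export PATH=/proj/staging/linaro-gcc/bin:$PATH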

Running clang-analyzer with the Linaro toolchain

Now follow the usual steps to cross-compile your program. To enable the analysis, make sure you run make via scan-build, as shown in my previous post on running clang-analyzer.
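
A minimal sketch of that invocation, assuming a Makefile-based project that honours CC and the arm-linux-gnueabihf- prefixed binaries shipped in the Linaro tarball (adjust the prefix and output directory to your setup):
  $ scan-build --use-cc=arm-linux-gnueabihf-gcc -o /tmp/scan-results make CC=arm-linux-gnueabihf-gcc
  $ scan-view /tmp/scan-results/<report-directory-printed-by-scan-build>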

Monday, November 18, 2013

Running a custom PHP application on the Bitnami Wordpress stack

Being an embedded software engineer, I usually space out a bit when reading about things like WordPress, Bitnami stacks, etc. I have dabbled in making small intranet sites with Ruby on Rails but have never looked at PHP.

Today, I downloaded a WordPress installer from Bitnami and 10 minutes later I had a small site running on my laptop. I had a look at the various plugins and installed a couple as well.

Intrigued, I wanted to spend a little time learning PHP. So, I created a custom app by following the instructions from Bitnami's Wiki topic and started reading the tutorial on w3schools. So far PHP doesn't seem very complex and I'm beginning to understand why it's so popular.

Steps to create a custom app


  1. Create a new folder in the apps directory, say 'test'.
  2. Copy over the conf folder from the wordpress directory into the apps/test folder. We will need to edit the configurations inside it.
  3. Search and replace 'wordpress' with 'test' in those files. Typically, these occurrences are directory paths.
  4. Create a htdocs folder in apps/test. This will hold our application. For now create an index.php and echo a 'Hello World'.
  5. Now, we need to edit the Apache server configuration files so that they can find the app. Bitnami makes this quite simple by keeping a configuration file in apache2/conf/bitnami with the name bitnami-apps-prefix.conf. Edit this file, copy the line pointing to the wordpress directory to a new line and change 'wordpress' to 'test'.
  6. Restart Apache and that's it! The whole sequence is sketched below.
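
Put together, it is only a handful of commands. This is a rough sketch assuming a Linux Bitnami stack installed under /opt/bitnami; the conf file names are whatever you find in the wordpress app's conf folder and may differ between stack versions.
  $ cd /opt/bitnami
  $ mkdir -p apps/test
  $ cp -r apps/wordpress/conf apps/test/conf
  $ sed -i 's/wordpress/test/g' apps/test/conf/*.conf     # fix up the directory paths
  $ mkdir apps/test/htdocs
  $ echo '<?php echo "Hello World"; ?>' > apps/test/htdocs/index.php
  # add a line like this to apache2/conf/bitnami/bitnami-apps-prefix.conf:
  #   Include "/opt/bitnami/apps/test/conf/httpd-prefix.conf"
  $ ./ctlscript.sh restart apache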

Thursday, March 21, 2013

Use git commits to understand code design

http://gitster.livejournal.com/30195.html makes an excellent point about using --grep and other options such as --author and --since to search through commits for information. One really good thing about Linux developers is the quality of their commits - it is very high.

I was reading up about omapdrm and wanted to find more information on GEM - the Graphics Execution Manager. Most of the code for omapdrm has come from Rob, so the easiest way to understand GEM here is to run 'git log --grep=GEM --author=Rob' and boom! you get a design document (almost!). Add -p to the git log command and you get the code changes as well.
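
In command form (the search term and author come straight from the omapdrm example; tweak --since to narrow things down further):
  $ git log --grep=GEM --author=Rob                        # the commit messages read almost like a design document
  $ git log -p --grep=GEM --author=Rob                     # add -p to see the code changes as well
  $ git log --since="1 year ago" --grep=GEM --author=Rob   # restrict the search by date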

On similar lines, reading code differences is a must. The patch format isn't really good for my brain, and I prefer seeing code differences side by side. The tool - Emacs of course, especially ediff. Just visit any file and do M-x ediff-revision. When choosing the revisions, you can pick anything that git rev-parse understands. This tip is brought to you by http://blog.endpoint.com/2009/03/emacs-tip-of-day-ediff-revision.html. A real life saver, because my older flow was to run vc-log, then visit a revision and finally run ediff-buffers. ediff-revision is much better.
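
Since ediff-revision accepts anything git rev-parse understands, all the usual revision forms work when it prompts you; the tag name below is just an example.
  $ git rev-parse HEAD~3                 # three commits before the current HEAD
  $ git rev-parse 'master@{yesterday}'   # where master pointed yesterday (needs the reflog)
  $ git rev-parse v3.12                  # a tag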