Monday, November 26, 2012

Automation feedback - when to think about parallelization

I've designed a few automated testing systems now. A frequent anti-pattern I've seen is using UI automation tools such as Selenium to drive acceptance tests. One of the big pains is that as a project grows, this kind of system becomes more and more cumbersome, requires a lot of maintenance (to the point that the cost of keeping the tests is questionable) and slows down feedback. The main goals to aim for are:

  1.  Feasible to implement in terms of development skills
  2.  Low cost in terms of time and maintenance effort
  3.  Effective and timely feedback


Here are a few options I've come across:

  • Run a nightly build of UI tests (feedback once every 24 hours).
    • I tried this on a project for Google. It can work for small teams with limited check-ins (~10 per day) where functionality is well defined and it is easy to blame a revision when something breaks
  • Parallelization - Ideally, each test runs concurrently on its own environment (see the sketch after this list).
    • Problems I've faced:
      • Can be dependent on hardware if VMs aren't an option
      • Usually BS trying to get corporate IT departments to give you what you need
      • May miss bugs that occur naturally when two users are using the system concurrently - bugs that would be caught by running tests on the same environment
  • Continuous run
    • The test suite just constantly builds and runs tests
      • Requires an extremely robust test suite or very basic site
      • Can make debugging difficult because runs cycle so rapidly that it's hard to match a failure to the build that caused it
  • Test prioritization
    • Label tests as core and have them run very frequently (every check-in)
    • Run tests labelled regression in a separate pipeline (the sketch after this list shows one way to tag and filter them)
      • This is usually very feasible
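
To make the parallelization and prioritization options concrete, here is a minimal sketch using NUnit's parallel-execution attributes and categories together with Selenium WebDriver. The fixture name, the URL and the category labels are illustrative assumptions, not from a real project.

    // Sketch: parallel UI tests plus core/regression tagging with NUnit + WebDriver.
    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;

    // Let separate fixtures run concurrently, capped at four worker threads.
    [assembly: Parallelizable(ParallelScope.Fixtures)]
    [assembly: LevelOfParallelism(4)]

    namespace AcceptanceTests
    {
        [TestFixture]
        [Category("Core")] // the fast, every-check-in set; slower tests would get [Category("Regression")]
        public class SmokeTests
        {
            private IWebDriver driver;

            [SetUp]
            public void StartBrowser()
            {
                // Each fixture gets its own browser, so parallel runs don't share session state.
                driver = new FirefoxDriver();
            }

            [TearDown]
            public void StopBrowser()
            {
                driver.Quit();
            }

            [Test]
            public void HomePage_HasATitle()
            {
                driver.Navigate().GoToUrl("http://localhost:8080/"); // assumed local deployment
                Assert.That(driver.Title, Is.Not.Empty);
            }
        }
    }

The check-in pipeline would then run only the Core category while a slower pipeline picks up Regression; most runners support this kind of filtering (NUnit's console runner, for instance, can include or exclude categories).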
What do I recommend? Don't run acceptance tests through the UI at all. Use a proper domain model so each acceptance test runs as a headless check, and keep a handful of general smoke tests through the UI. However, on JavaScript-heavy websites where business logic lives in the user interface, this isn't always feasible. In that case I'd probably choose parallelization, since it strikes the best balance between feasibility, cost and feedback.
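
As a rough illustration of the headless approach, here is what an acceptance check against the domain model might look like. Order and its behaviour are hypothetical stand-ins for real domain types.

    // Sketch: the same business rule a UI test would verify, exercised
    // directly against the domain model - no browser, no Selenium.
    using NUnit.Framework;

    public class Order
    {
        public decimal Total { get; private set; }

        public void AddItem(decimal unitPrice, int quantity)
        {
            Total += unitPrice * quantity;
        }
    }

    [TestFixture]
    public class OrderAcceptanceTests
    {
        [Test]
        public void Order_total_reflects_all_items()
        {
            var order = new Order();
            order.AddItem(unitPrice: 9.99m, quantity: 2);

            // Runs in milliseconds and cannot flake on Ajax timing or browser quirks.
            Assert.AreEqual(19.98m, order.Total);
        }
    }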

Thursday, November 8, 2012

Agile Team Health Tips


A lot of teams get sick working in enclosed areas. Here are a few tips I've started to promote to keep a team healthy:

  • If the team is testing against mobile, make sure the phones are wiped down every day
  • If someone feels sick, quarantine them immediately. Have them work from home for a day and see if it's an infection.
  • Pair with separate keyboards / mice
  • Avoid eating while you work
  • Keep a sustainable pace; don't get overworked and run your immune system down

Tuesday, October 30, 2012

Acceptance testing through the UI layer

This is a subject I have been thinking about for a few weeks now. On a recent project I was working on, we ran an automated test suite that handled all acceptance tests through SpecFlow + WebDriver. I'm not sure I would attempt this again, but I find myself inheriting a system that sounds like it is based on a lot of the same principles. Here are a few challenges I faced that I hope to avoid:

  • Asynchronous integration with external systems
    • In my previous case, we had an external system that sent emails whose content needed to be checked. This could take anywhere from 1 second to 3 minutes to complete and was of extremely high business value.
    • Solved by having an AssertWithRetry that would poll every 500 ms, sleeping between checks until it timed out (see the sketch after this list)
    • Problem: delayed feedback cycle. Many test runs involved multiple waits, which could seriously slow down the suite
  • Flakiness of Selenium due to Ajax + JavaScript errors
    • Led to a perception among the dev team that the tests could not be trusted
    • "Flaky test" vs. "legitimate bug" fights with devs
    • Frequent quarantining of tests without enough investigation; client + devs were unwilling to accept the time it took for debugging
    • Solved with automatic re-runs
  • Edge case testing through the UI
    • Can be mitigated with a high level of parallelization, which is not always an option
  • Zero failed tests before check-in
  • Redundancy in SpecFlow steps
  • Test IE first vs. Firefox?
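
Here is a minimal sketch of the AssertWithRetry idea mentioned above: poll a condition every 500 ms until it passes or a timeout expires. FetchLatestEmailBody is a hypothetical stand-in for however you read the external system's output.

    // Sketch: poll-and-sleep assertion for asynchronous external systems.
    using System;
    using System.Diagnostics;
    using System.Threading;

    public static class AssertWithRetry
    {
        public static void Eventually(Func<bool> condition, TimeSpan timeout, string failureMessage)
        {
            var watch = Stopwatch.StartNew();
            while (watch.Elapsed < timeout)
            {
                if (condition())
                    return;            // condition met; the assertion passes
                Thread.Sleep(500);     // the 500 ms poll interval from the post
            }
            throw new Exception("Timed out waiting for: " + failureMessage);
        }
    }

    // Usage: wait up to 3 minutes for the email content to arrive.
    // AssertWithRetry.Eventually(
    //     () => FetchLatestEmailBody().Contains("expected content"),
    //     TimeSpan.FromMinutes(3),
    //     "confirmation email with the expected content");

The trade-off is exactly the one noted above: each poll blocks a worker, and the waits compound across a large suite.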

Monday, October 29, 2012

QA is like...

I was recently trying to explain my job without making it seem too technical. I was trying to come up with a simple analogy, and this one got some traction. Context: I was discussing it with someone I knew from UBC, where I had done my arts degree.

So it goes like this. Writing software is like writing a book. Lots of writers can vomit on a page and try to sell it. Good writers have a team to support them. In my case, I act as a proofreader. I take a sober second look and point out what is obviously wrong, makes no sense or can be reworked. We have the expectation that the writer has used spell check (like unit tests) and has written complete sentences. We hand back notes on what has to change and what might need a second look. The publisher (the business) can push it out the door if they really want.

Or something.

Johannesburg

Arrived in Johannesburg. Am very tired but am attempting to push through and beat the jet lag quickly. Will be on the job site tomorrow, so I'll need to get up to speed quickly.