Monday, September 3, 2018

Blockchain - Towards a Decentralized Future?



I've signed up for a blockchain course through Lighthouse Labs in Vancouver. My hope is to upskill my programming abilities for the future, as well as to get back into more technical work after being on course for a management role.



I'm chasing a few things by doing this. Part of me, the political scientist that I was in another lifetime, is excited for the move away from centralized control of things like economics, which are so frequently susceptible to corruption and authoritarianism. Blockchain may represent the best hope I've seen for the transparent, democratic principles that the Internet was founded on.



The technical side of me is excited to be on the cutting edge of technology again. The industry seems hyped about the possibilities inherent in the technology, and I'm excited to see how we can implement it in a reliable, responsible manner that delivers more for humanity than just cryptocurrency.



While much has been said about the ecological impact of Bitcoin mining, I'm excited about the prospect of using more humanitarian computations to drive the blockchain, so that the work both supports the platform and provides a tangible good for the world.



My background as an SET means I'm interested in testing and security as well, and I hope to expand on this during the course. Ethereum's reputation for frequently releasing breaking changes that surprise consumers (at least, according to the blockchain developers I've spoken with) raises a number of interesting questions about the brittleness of an interface that is meant to be the foundation of contracts between parties.


I've spent some time researching some of the security aspects as well, and they have been somewhat difficult to wrap my mind around. My previous work in identity management compounds my interest in how actors can maintain trust in an anarchic system, and I look forward to expanding on this. I'm also interested in what safety measures have been built into the blockchain framework to support this, and what checks are in place to uphold this trust.

Finally, somewhat sheepishly, I'll admit that I jumped on the CryptoKitties bandwagon earlier this year. After seeing it spike in value, with absurd amounts of money moving around, I watched the entire system come to a grinding halt because of the volume of digital cats being traded. A system designed to be the world's computer, with the redundancy of a decentralized network, being brought down by digital cats seems absurd, but it was a reality I was faced with. Avoiding this kind of problem, where success becomes failure due to popularity, was something I faced during my time in the video game industry, and I'm interested to see if there are any learnings that can cross-pollinate.






I'm already thinking up ideas for my final project. I see central authorities in a lot of my day-to-day work and, like the monopoly-busters of old, I'm looking forward to upending the system.

Thursday, January 3, 2013

A few tricks for better build pipelines


  1. Monitor failures - each failed build needs to be logged and given a quick post-mortem. Identify which test failed and why. If you can't determine why, track the test over time and look for patterns. Lots of teams just say, "the test is flaky" and move on.
  2. Run failed tests first. When there is a failure (and assuming it was logged in step 1), the pipeline can be set to pick up that test first. If it fails, the regression is still present and the pipeline should run red again (see the sketch after this list).
  3. Prioritize test order - have the business-critical tests run in an earlier stage than others, so devs get faster feedback when they've broken something.
  4. Short feedback cycles. The rule of thumb I use is to keep the full run under 10 minutes. When you breach this, it's time to talk about parallelization / performance testing / magic wanding.
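
A minimal sketch of the failed-tests-first idea from point 2, assuming a pytest-based suite. pytest already ships this behaviour as --ff / --failed-first; the hook below just makes the mechanism explicit, and the failure-log name is mine:

    # conftest.py - reorder collected tests so previously failed ones run first.
    # FAILURE_LOG is a hypothetical file written by the previous run (step 1's log).
    import json
    import os

    FAILURE_LOG = "failed_tests.json"

    def pytest_collection_modifyitems(config, items):
        """Put tests that failed last time at the front of the run."""
        if not os.path.exists(FAILURE_LOG):
            return
        with open(FAILURE_LOG) as f:
            failed = set(json.load(f))
        # Stable sort: previously failed tests first, everything else keeps its order.
        items.sort(key=lambda item: 0 if item.nodeid in failed else 1)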

Monday, November 26, 2012

Automation feedback - when to think about parallelization

I've designed a few automated testing systems now. A frequent anti-pattern I've seen is using UI automation tools such as Selenium to drive acceptance tests. One of the big pains is that as a project grows, this system becomes more and more cumbersome, requires a lot of maintenance (to the point that the cost of keeping the tests is questionable), and slows down feedback. The main goals to aim for are:

  1.  Feasible to implement in terms of development skills
  2.  Low cost in terms of time and maintenance effort
  3.  Effective and timely feedback


There are a few options I've come across:

  • Run a nightly build of UI tests (feedback once every 24 hours.)
    • I tried this on a project for Google. It can work for small teams with limited check-ins (~10 per day) where functionality is well defined and it is easy to blame specific revision numbers
  • Parallelization - Ideally, each test runs concurrently on its own environment. 
    • Problems I've faced:
      • Can be dependent on hardware if VMs aren't an option
      • Usually BS trying to get corporate IT departments to give you what you need
      • May miss bugs that would occur naturally when two users are concurrently using the system, bugs that would be caught by running tests on the same environment
  • Continuous run
    • The test suite just constantly builds and runs tests
      • Requires an extremely robust test suite or very basic site
      • Can make debugging difficult, because runs come so rapidly that it's hard to map a failure back to the build that caused it
  • Test prioritization
    • Label tests as core and have them run very frequently (on every check-in)
    • Run tests labelled regression in a separate pipeline (see the sketch after this list)
      • This is usually very feasible
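
A minimal sketch of the core/regression labelling, assuming a pytest suite; the example tests are hypothetical placeholders:

    # Register the markers (e.g. in pytest.ini) so pytest doesn't warn about them:
    #   [pytest]
    #   markers =
    #       core: business-critical, runs on every check-in
    #       regression: broader suite, runs in a separate pipeline
    import pytest

    @pytest.mark.core
    def test_checkout_happy_path():
        ...  # hypothetical business-critical test

    @pytest.mark.regression
    def test_discount_code_edge_cases():
        ...  # hypothetical slower, broader test

    # On every check-in:             pytest -m core
    # Separate regression pipeline:  pytest -m regression
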
What do I recommend? Don't run acceptance tests through the UI at all. Use a proper domain model so they run as headless checks, and keep a handful of general smoke tests for the UI itself (a sketch of this follows below). However, JS-heavy websites where business logic happens in the user interface mean this isn't always feasible. In that case I'd probably choose parallelization, since it strikes the best balance between feasibility, cost, and feedback.
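
A minimal sketch of a headless acceptance check, assuming a hypothetical Cart domain model. The point is that the business rule is asserted directly, with no browser or Selenium in the loop:

    # Acceptance check against the domain model, not the UI.
    class Cart:
        def __init__(self):
            self.items = []

        def add(self, name, price):
            self.items.append((name, price))

        def total(self):
            return sum(price for _, price in self.items)

    def test_cart_totals_its_items():
        cart = Cart()
        cart.add("book", 10.00)
        cart.add("pen", 2.50)
        assert cart.total() == 12.50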

Thursday, November 8, 2012

Agile Team Health Tips


A lot of teams get sick working in enclosed areas. Here are a few tips I've started to promote to keep a team healthy:

  • If the team is testing against mobile, make sure the phones are wiped down every day
  • If someone feels sick, quarantine them immediately. Have them work from home for a day and see if it's an infection.
  • Pair with separate keyboards / mice
  • Avoid eating while you work
  • Sustainable pace - don't get overworked and run your immune system down

Tuesday, October 30, 2012

Acceptance testing through the UI layer

This is a subject I have been thinking about for a few weeks now. On a recent project, we ran an automated test suite that handled all acceptance tests through SpecFlow + WebDriver. I'm not sure I would attempt this again, but I find myself inheriting a system that sounds like it is based on a lot of the same principles. Here are a few challenges I faced that I hope to avoid:

  • Asynchronous integration with external systems
    • In my previous case, we had an external system that output emails whose content needed to be checked. This could take between 1 second and 3 minutes to complete and was of extremely high business value.
    • Solved by having an AssertWithRetry that would poll every 500 ms until the assertion passed or timed out (see the sketch after this list)
    • Problem: delayed feedback cycle. Many test runs had multiple waits, which could drag out the whole suite
  • Flakiness of Selenium due to Ajax + javascript errors
    • Led to the perception among the dev team that tests could not be trusted
    • "Flaky test" vs. "Legitimate bug" fights with devs. 
    • Frequent quarantining of tests without enough investigation. Client and devs were unwilling to accept the time debugging took.
    • Solved with re-run
  • Edge Case testing through UI
    • Can be solved with a high level of parallelization, but that's not always an option
  • Zero failed tests before check-in
  • Redundancy in SpecFlow steps
  • Test IE first vs. Firefox?
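
A minimal sketch of the AssertWithRetry idea in Python (the original was SpecFlow/C#, so the names and translation here are mine): poll the assertion every 500 ms and give up after a timeout.

    import time

    def assert_with_retry(assertion, timeout_s=180.0, interval_s=0.5):
        """Run `assertion` (a callable that raises AssertionError on failure)
        every `interval_s` seconds until it passes or `timeout_s` expires."""
        deadline = time.monotonic() + timeout_s
        while True:
            try:
                assertion()
                return
            except AssertionError:
                if time.monotonic() >= deadline:
                    raise  # re-raise the last failure once time runs out
                time.sleep(interval_s)

    # Usage, e.g. checking a slow external email system:
    # assert_with_retry(lambda: check_email_contains("Welcome!"))  # hypothetical helper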

Monday, October 29, 2012

QA is like...

I was recently trying to explain my job without making it seem too technical. I was trying to come up with a simple analogy, and this one got some traction. Context: I was discussing it with someone I knew from UBC, where I had done my arts degree.

So it goes like this. Writing software is like writing a book. Lots of writers can vomit on a page and try to sell it. Good writers have a team to support them. In my case, I act as a proofreader. I take a sober second look and point out what is obviously wrong, makes no sense, or can be reworked. We have the expectation that the writer has used spell check (like unit tests) and has written complete sentences. We hand back notes on what has to change and what might need a second look. The publisher (the business) can push it out the door if they really want.

Or something.