Previously on Dr. Lambda's blog:
In a previous post I presented my newest pet project: Isabella. Isabella is a voice-controlled personal assistant, like Siri, Alexa, and others. We decided to investigate how difficult it is to make such a program. In the last post we made some small changes with big impact.
Now, the continuation...
Strengthening the foundation
As we are in a phase of improvement, this seems like an appropriate time to make the foundation more solid: automated testing. On one hand I think I should have done this from the start, but on the other hand...
I am a big fan of Dan North, in particular his [Spike and Stabilize pattern]. With this we treat all code as if it were a spike: code fast and unstable, then deploy the feature to the users. After a certain amount of time – like a month – come back and see if it is being used. If it is mostly unused, delete it; if it is used a lot, refactor it and write automated tests for it. This way you invest only very little time in code that ends up being unused.
Because Isabella started as just an experiment – not code I expected to be long-lived – I opted to prioritize new features over a stable code base, just as Spike and Stabilize suggests. I recently changed my outlook for Isabella; I now expect that I will use (and work on) her for a long time. Said in another way: I have deployed the code, waited, and I now know that the code is being used. So, it is time to stabilize!
Stabilizing the code was an incredibly frustrating process, for primarily two collaborating reasons. The first reason requires a bit of explanation. Many non-technical people think that programmers spend most of their time coding. In practice this is far from the truth. Normally we spend most of our time searching for and fixing bugs. This I can easily handle. As a child I loved mazes; now I enjoy being lost in a very complex system, fighting to find my way out. I savour the victory when I finally crack it.
During this process, however, I spent almost all my time searching the internet for answers: which libraries to use, what some syntax means, and just hard problems. Very boring, and very time-consuming. And when I wasn't searching the internet, I was rewriting the same code again and again. I didn't feel like I was making any progress at all. And then finally, even when something worked there was very little visible effect, so it didn't feel like a victory either.
Here is the documentation of my journey, including the problems and solutions I encountered.
Jasmine
My first instinct was: I want testing, so I should start writing tests. I had a bit of experience with jasmine-node, so I installed it and started writing tests. The problem was that, because this was client code, I had spread it over multiple files included with multiple script tags. Thus jasmine-node couldn't find any of the dependencies. Adding import statements to all the files was an obvious solution, but that would give errors on the client side.
I did some research and found something called system.js, which emulates import statements on the client side. This meant having to refer directly to the .js files instead of the .ts files, but in spite of this it seemed like a neat solution.
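A minimal sketch of what such a setup might look like. To be clear, the file names, paths, and configuration values here are my assumptions for illustration, not Isabella's actual layout:

```javascript
// Hypothetical SystemJS setup; file names and paths are assumptions.
// The HTML now needs only one script tag, for system.js itself.
System.config({
  baseURL: '/js',            // where tsc emits the compiled .js files
  defaultJSExtensions: true  // so "import './speech'" resolves to speech.js
});

// Load the entry module; SystemJS resolves its imports at runtime.
System.import('main').catch(function (err) {
  console.error(err);
});
```

The same source files can then be required as plain modules by jasmine-node on the test side, which is the whole point of the exercise.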
Codeship, Bitbucket, and Git
My next idea was to set up an automatic test-and-deploy cycle. I wanted Heroku to run the app, [Codeship] to test it, and Bitbucket to host the code. So far I had just used Heroku as my code host, so I was faced with an entirely unfamiliar challenge: how do you move from one git repo to another?
I am no git guru, unfortunately. I wish I could tell you exactly the steps I took to make this work, but I have no idea. I pulled one way, then the other, committed, merged, and suddenly I could push to Bitbucket. Codeship quickly picked up the push and deployed it to Heroku. I'm skipping a few small issues with some RSA keys for deploying from Codeship.
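For the record, the textbook way to do this would be to add the new host as a second remote and push the history to it. A sketch, where the repository names and URLs are placeholders, not my actual repositories:

```shell
# Start from the existing clone that pushes to Heroku.
cd isabella

# Add Bitbucket as a second remote (URL is a placeholder).
git remote add bitbucket git@bitbucket.org:me/isabella.git

# Push the full history of master to the new host.
git push bitbucket master

# From here on, pushes to Bitbucket trigger Codeship, which deploys to Heroku.
```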
Gulp
If we think of problems as lianas, software development is like playing Tarzan; we are constantly swinging from one to the next, in a seemingly endless jungle. Usually when I worked on Isabella I would just start tsc -w in the background and forget about it. Sometimes I would forget to start the compiler, which was super annoying, because then I would push the build to the cloud to test it, and nothing would happen. This was fairly bad, but having added tests it became much more annoying. First, there were now two things to remember (or forget). Sure, it also takes a bit longer to deploy, but having Codeship reject a deploy because I forgot to test locally was just a slap in the face.
It was time to set up a build tool. I did some research and narrowed the decision down to Gulp and Grunt. To me they seemed fairly equal, and I don't even remember what the deciding factor ended up being. I went with Gulp.
The great advantage of a build tool is that you can add as many post-processing steps as you want. Encouraged by the [Typescript documentation], I suddenly wanted browserify and uglify. I also wanted it to be "watching", so I couldn't forget anything.
Uglify was no problem, watching was easy, browserify was... difficult. As mentioned earlier, my test files used imports. In fact, this had been quite tricky to achieve. Now it stood in my way, and I was not about to poke that bear. Therefore I abandoned my dream of browserify.
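The gulpfile at this point looked roughly like the sketch below. It is a reconstruction under assumptions: the plugin choices (gulp-typescript, gulp-uglify), the paths, and the task names are mine, not necessarily what Isabella actually used.

```javascript
// Hypothetical gulpfile sketch: compile, minify, and watch.
// Plugin names, paths, and task names are assumptions.
var gulp = require('gulp');
var ts = require('gulp-typescript');
var uglify = require('gulp-uglify');

gulp.task('build', function () {
  var tsResult = gulp.src('src/**/*.ts')
    .pipe(ts({ target: 'es5' }));   // compile the TypeScript sources
  return tsResult.js                // gulp-typescript exposes the emitted .js files here
    .pipe(uglify())                 // minify the output
    .pipe(gulp.dest('public/js'));
});

// Watch the sources, so starting the compiler can never be forgotten.
gulp.task('watch', ['build'], function () {
  gulp.watch('src/**/*.ts', ['build']);
});
```

Notice there is no browserify step, for the reason just described.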
Do-over: Grunt!
As I remember it, I was wrestling with Gulp's jasmine-node plugin not supporting the later versions of ECMAScript – in particular Promises (which I use heavily) – when I suddenly stumbled on a blog describing my dream of a build pipeline. It had everything: a client part, a server part, and a common part. The server part was tested using jasmine-node; the client part was tested with jasmine and PhantomJS. The client was browserify-ed and uglify-ed. There was watching, and the folder structure was beautiful. It was a [fine template] for a project like this.
The only problem was that it used Grunt. I'm not one to be over-confident in my decisions, so when I learn something new I gladly change. Thus I deleted everything I had made up till this point and tried swapping in Grunt.
This was not problem-free, but it wasn't too bad. I ended up testing both client and server with jasmine-node. Isabella is very light on the DOM and very heavy on APIs, which I can test just as easily with jasmine-node.
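For illustration, the final setup could be sketched as a Gruntfile like this. Everything here is an assumption: the plugin names (grunt-ts, grunt-jasmine-node, grunt-browserify, grunt-contrib-uglify, grunt-contrib-watch), the folder layout, and the option values are mine, not the template's.

```javascript
// Hypothetical Gruntfile sketch of the final pipeline.
// Plugin names, paths, and options are assumptions.
module.exports = function (grunt) {
  grunt.initConfig({
    ts: {            // compile the TypeScript sources
      default: { src: ['src/**/*.ts'], outDir: 'build' }
    },
    jasmine_node: {  // run both client and server specs in node
      all: { options: { specFolders: ['build/spec'] } }
    },
    browserify: {    // bundle the client so imports just work
      client: { src: ['build/client/main.js'], dest: 'public/bundle.js' }
    },
    uglify: {        // minify the bundle
      client: { files: { 'public/bundle.min.js': ['public/bundle.js'] } }
    },
    watch: {         // rebuild and retest on every change
      files: ['src/**/*.ts'],
      tasks: ['default']
    }
  });

  grunt.loadNpmTasks('grunt-ts');
  grunt.loadNpmTasks('grunt-jasmine-node');
  grunt.loadNpmTasks('grunt-browserify');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('default', ['ts', 'jasmine_node', 'browserify', 'uglify']);
};
```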
Conclusion
Although this was a tough stretch, I did accomplish a few things. My code is now a bit harder for my students to copy, thanks to uglify. It also takes up less space, and thus loads faster, also thanks to uglify. It is browserify-ed, so I can use imports as much as I want, and I can never forget to include a file in the HTML. I have testing up and running, so now I can start adding tests whenever I add new features, or fix bugs in current ones. I have a guarded deploy, so even if I forget to test locally I am guaranteed that the tests will be run before a deploy.
I don't have any general words of wisdom. I won't say that you should always just use Grunt, or anything like that. Setting up a good pipeline is hard, but it is also invaluable. It is also a problem that we don't tackle often. I am familiar with the DevOps saying: if it hurts, do it more. It encourages practicing the skills that we struggle with. If you are afraid of deploying, do it more, so you minimize the risk. If you are afraid of changing some code, delete it and write it again, so you know what's going on. While I agree wholeheartedly with this advice, I don't feel like going through this process again any time soon. If you are about to set up a pipeline of your own: I wish you the best of luck.