I’ve spent the week locked in Makespace working on guitars, and thought I’d write up some notes on the things I’ve been working on to give insight into what goes into making guitars. You can see it here on the Electric Flapjack blog.
I’ll stop with the golang tips shortly, but another quick time saver in case you’ve not seen this before: you can use direnv to manage your GOPATH settings for each of your projects.
direnv is a small utility that will set/unset environment variables as you enter/leave directories. It’s dead easy to set up, and available in Homebrew if you’re on a Mac. This means I can set a GOPATH specifically for each Go project, without having to remember to do GOPATH=$PWD each time – direnv just sets it as I change directory into the project, and unsets it when I move away.
This can be useful for other things too, like setting PYTHONPATH or other project-specific environment variables.
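As a sketch, a project’s .envrc might look like this (direnv will prompt you to run `direnv allow` once before it loads the file; the PYTHONPATH value is just an illustrative example):

```shell
# .envrc at the project root: direnv exports these on entering the
# directory, and unloads them again when you leave.
export GOPATH=$PWD
export PYTHONPATH=$PWD/lib
```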
Hat tip to Day Barr for alerting me to that one.
I wrote recently about my thoughts on golang, concluding that although far from perfect, I quite like the language as it makes solving a certain class of problem much easier than traditional methods.
One of the things I was a bit dismissive of was how it manages packages. Whilst I’m not a fan of its prescriptive nature, its out-of-the-box behaviour is in my mind just not compatible with delivering software repeatedly and reliably for production. However, it’s fairly easy to work around this, and I’ve not seen anyone use this particular approach, so I thought I’d document it for future people searching for a solution.
The problem is this: by default golang has a nice convenience feature that third party packages are referred to by their source location. For example, if I want to use GORM (a lightweight ORM for Go), which is hosted on github, I’ll include it in my program by writing:
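Shown here as a minimal file – the blank identifier is used purely to illustrate the path convention, since nothing else in this snippet calls into the package:

```go
package main

// Third party packages are imported by their source location on
// github; no separate dependency manifest is involved.
import _ "github.com/jinzhu/gorm"

func main() {}
```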
And as a build stage I’ll need to fetch the package by running the following command:
go get -v github.com/jinzhu/gorm
This command checks the package out into your $GOPATH/src directory at $GOPATH/src/github.com/jinzhu/gorm, doing a git clone of whatever their latest master code is.
On one hand this is very nice: the language builds in how to find and fetch third party dependencies. However, it enforces two things that I don’t want when I’m trying to build production software:
- I now rely on a third party service being around at the time I build my software
- The go get command always fetches the latest version, so I can’t control what goes into my build
Neither of these is something I’m willing to accept in my production environment, where I want to know I can successfully build at any time, and I have full control over what goes into each build.
There is a feature of the golang build system you can use to solve this, it’s just not that obvious to newcomers, and on its own isn’t very useful, so here’s my solution, based on the assumption that you’re already using git for version control, and that you have $GOPATH pointed at your project’s root folder:
- Clone the project into your own code store repository. I always do this anyway, as you never know when third party projects will vanish or change significantly.
- Create a vendor directory in your project. The golang build system will look in $GOPATH/vendor for packages before looking in the src directory.
- Add the project as a git submodule at the appropriate point under vendor. For GORM that’d be vendor/github.com/jinzhu/gorm, similar to how go get would have put it in the src directory.
- Replace your go get build step with a git submodule update command.
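Put together, the steps might look something like this – the mirror URL and the version tag are illustrative, not from any real setup:

```shell
# One-off setup: vendor your mirror of the project as a submodule,
# mirroring the path go get would have used.
git submodule add git@git.example.com:mirrors/gorm.git vendor/github.com/jinzhu/gorm

# Pin the dependency to a known-good commit or tag.
(cd vendor/github.com/jinzhu/gorm && git checkout v1.9)
git commit -am "Pin gorm"

# Build step: instead of `go get`, fetch the pinned submodules.
git submodule update --init
```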
And voilà, you’re done. Using git submodules means you can control which commit of the third party project you’re using, and by pointing it at your own mirror, you can ensure that as long as your own infrastructure is up you can still deliver software regardless of external goings-on.
As a friend of mine pointed out, there are tools you can use to manage third party code into the vendor location, such as vndr, but the fewer tools I need to install to build a product the better – still, if you want to avoid creating the directories yourself then you should give it a look.
The Go programming language has been around for about a decade now, but in that time I’ve not had much call to create new networked services, so I’d never given it a go (I find I can’t learn new programming languages in the abstract; I need a project, otherwise the learning doesn’t stick). However, I had cause to redo some old code at work that had grown a bit unwieldy in its current Python + web-framework-du-jour form, so this seemed like a chance to try something new.
I was drawn to Go by the promise of picking up some modern programming idioms, particularly around making concurrency manageable. I’m still amazed that technologies like Grand Central Dispatch (GCD), which save programmers from worrying about low-level concurrency primitives (which as weak-minded humans we invariably get wrong), are not more widely adopted – modern machines rely on concurrency to be effective. In the Bromium Mac team we leaned heavily on GCD to avoid common concurrency pitfalls, and even then we created a lot of support libraries to simplify it even further.
Modern web service programming is inherently a problem of concurrency – be it on the front end when you’re managing many requests to your service at once, or on the back end when you’re trying to offload long-running and periodic tasks away from the request-serving path. Unfortunately the dominant language for writing web services, Python, is known to be terrible at handling concurrency, so you end up offloading concurrency to other programs (e.g., nginx on the front end, celery on the back end), which works, but means you can only deal with very coarse-grained parallelism.
Go seems to have been designed to solve this problem. It’s a modern language, with some C-like syntax but free of the baggage of memory management and casting (for the most part), and it makes concurrency a first class citizen in its design. Nothing it does is earth-shatteringly new – the goroutine concurrency primitive is very old, and the channel mechanism used to communicate between these routines is standard IPC fare – but what it seems to pull off is putting these things together in a way that is very easy to leverage. To my mind it lacks the flexibility of the aforementioned GCD, but ultimately it is sufficiently expressive that I find it very productive for writing highly concurrent code safely. It actually makes writing web services that have such demands fun again, as you end up with a single binary that does everything you need, removing the deployment tedium of the nginx/python/celery pipeline. You can just worry about your ideas, which is really all I want to do.
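A minimal sketch of goroutines and channels working together – the worker-pool shape and all the names here are mine, invented for illustration:

```go
package main

import "fmt"

// sumOfSquares fans the numbers 1..n out to a small pool of
// goroutines over one channel, and collects the squared results
// back over another.
func sumOfSquares(n int) int {
	jobs := make(chan int)
	results := make(chan int)

	// Three concurrent workers, each launched with the `go` keyword.
	for w := 0; w < 3; w++ {
		go func() {
			for j := range jobs {
				results <- j * j
			}
		}()
	}

	// Feed the jobs from another goroutine so we don't deadlock
	// on the unbuffered channel.
	go func() {
		for i := 1; i <= n; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	sum := 0
	for i := 0; i < n; i++ {
		sum += <-results
	}
	return sum
}

func main() {
	fmt.Println(sumOfSquares(5)) // 1+4+9+16+25 = 55
}
```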
Another nice feature is the pseudo object orientation system in Go. Go has two mechanisms that lead you in the same direction as traditional OO programming – structs and interfaces. Structs let you define data structures as you might in C, but you can use composition to get a sort of inheritance if you need it, and interfaces just define a list of functions you can call on a type. But an interface isn’t tied to a struct as it might be in a traditional OO language; they’re defined separately. This seems weird at first, but is really quite powerful, and makes writing tests very easy (and again, fun), as it means you can “mock”, say, the backend object simply by writing an object that satisfies an interface, rather than worrying about actual inheritance. Again, it’s nothing new, it’s just pulled together in a way that is simple and easy to be productive with.
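A sketch of the idea – Store, fakeStore and Greet are invented names for illustration, not from any real project:

```go
package main

import "fmt"

// Store is an interface the business logic depends on; any type with
// a matching Get method satisfies it implicitly -- there is no
// inheritance declaration tying them together.
type Store interface {
	Get(key string) string
}

// fakeStore is an in-memory stand-in, handy as a test "mock".
type fakeStore struct{ data map[string]string }

func (f fakeStore) Get(key string) string { return f.data[key] }

// Greet is written against the interface, not a concrete backend,
// so it works with a real database-backed store or the fake alike.
func Greet(s Store, key string) string {
	return "hello " + s.Get(key)
}

func main() {
	mock := fakeStore{data: map[string]string{"u1": "Alice"}}
	fmt.Println(Greet(mock, "u1")) // hello Alice
}
```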
The final nicety I’ll mention is an idiom that we forced on ourselves in the Mac team at Bromium – explicit error handling and explicit returns of errors next to the valid result. This makes writing code to handle errors really natural, which is important, as programmers are inherently lazy people and a common cause of bugs is that the programmer simply didn’t think about error handling. Go’s library design and error type make this easy.
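A minimal sketch of that idiom (divide is an invented example): the error comes back right next to the result, and the caller has to acknowledge it before using the value:

```go
package main

import (
	"errors"
	"fmt"
)

// divide returns its result alongside an explicit error, the
// standard Go shape for fallible functions.
func divide(a, b float64) (float64, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	if res, err := divide(10, 2); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println(res) // 5
	}
}
```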
For all this, Go has its flaws. Out of a necessity to allow you to have values that may have no value, Go has a pointer type. But it also makes accessing concrete values and pointers look identical in most cases, so it’s easy to confuse the two, which can occasionally lead to unexpected bugs, particularly when looping over things, where taking the address of the loop variable doesn’t do what you might expect. The testing framework is deliberately minimal, and the lack of class-based testing means you can’t really use setup and teardown methods, which leads to a lot of boilerplate code in your tests – this is a shame, as otherwise Go makes writing tests really easy. And let’s not get started on the package system in Go, which is opaque enough to be a pain to use.
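An illustrative sketch of that loop-variable pitfall (note that Go 1.22 later changed loop variables to be per-iteration, which removes this particular trap on modern toolchains; the function names here are mine):

```go
package main

import "fmt"

// pointersNaive takes &v of the loop variable directly. On Go
// versions before 1.22 there was a single loop variable reused
// across iterations, so every stored pointer aliased it and all
// three ended up pointing at the last value.
func pointersNaive(values []int) []*int {
	var out []*int
	for _, v := range values {
		out = append(out, &v)
	}
	return out
}

// pointersSafe copies into a fresh variable each pass, which is
// correct on every Go version.
func pointersSafe(values []int) []*int {
	var out []*int
	for _, v := range values {
		v := v // new variable per iteration
		out = append(out, &v)
	}
	return out
}

func main() {
	for _, p := range pointersSafe([]int{1, 2, 3}) {
		fmt.Print(*p, " ")
	}
	fmt.Println()
}
```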
It’s also a little behind, say, Python in terms of full-stack framework support. The Go community seems against ORMs and Django-style stacks, but that does mean it’s hard to justify its use if you’re writing a website of any complexity for humans to use. There is at least a usable minimal DB ORM in the form of GORM that saves you from writing SQL all the time.
But for all its flaws, I really have taken to Go. I’ve written a small but reasonable amount of production-quality code in it now, and I still find it remarkably productive. For writing backend web services, it’s a joy. There’s not enough mature framework support yet that I’d use it instead of Django/Python for a full user-interactive website, but for IoT backends and the like it’s really neat (in both senses).
If any of this sounds interesting to you then I can recommend The Go Programming Language book. Not only is it easy to read, it gives you good practical examples that let you see the power of its primitives very quickly. If you think I’ve got it wrong with any of my criticisms of the language, then do let me know – I’m still a relative newbie to golang and very happy to be corrected so I can be even more productive with it!
About 18 months ago I wrote something here about how I was trying to get better at playing guitar, and I was going to try to post a video to YouTube once a week with a new song snippet as a way of having some discipline. If you do recall that, you’ll also know I didn’t do it (I think I managed one more after that post).
But the reason for not doing it was at least reasonable: I actually started taking lessons, and my teacher makes me practice daily, so the discipline sorted itself out, and saved you, dear reader, from lots of bad cover songs.
Instead you can watch some bad bits of me doing blues style improv from my last daily practice session, warts and all:
Now, I may not be giving Joe Bonamassa cause to question his career choices, but I look at this and am somewhat amazed how far I’ve managed to come in 18 months thanks to David, my guitar teacher. When I wrote that original post back in May I was just trying to copy bits of other songs, and here I am today able to throw down a 12 bar blues backing track and then ad-lib over it, even throwing in a bit of wah pedal, to my heart’s content (albeit in a slightly repetitive and formulaic way :).
Partly this is the direction David and I have been working towards – rather than learning to cover old songs or working towards grades, I’ve just been trying to understand the building blocks for playing the blues: what are the grammar and vocabulary that make up a song? I may not yet be writing more than basic sentences, and although I occasionally feel learning a song might be more satisfying in the short term, it’s when I get time to do a little bit of ad-lib like the above that it all pays off. Ask me to play a song and I’m hopeless, but give me a looper pedal and I can entertain myself for an age with things like this.
The closest I get to playing an actual song is things like this, where I’m riffing on the great Jeff Beck Group track Rock My Plimsoul (who in turn were riffing off B.B. King’s Rock Me Baby):
I’ve still a long way to go, of that I’ve no delusion – the open stages of Cambridge are in no danger of seeing me any time soon. But it’s nice to occasionally reflect that one has at least made some progress, even if I can’t play a tune on demand :)
I’ve been blogging a bunch about my brother’s band of late – mostly as there’s a lot happening right now. After the success of their King Tut’s gig, they’ve just launched a Kickstarter to get the song that was one of the stars of the set out as a single:
If you like your rock music, go check this out and give them a nudge to help get the single out.
They played a six-song set as part of a four-band line-up at King Tut’s that evening. They executed their songs flawlessly, they sounded great, and the room was packed (by far the biggest audience of the evening). They even had the audience singing along loudly to their single Ghosts and the as-yet-unreleased Ocean Waves.
I was also a wee bit proud for another reason: Tristan used the guitar I built for him on stage. That guitar, which was a labour of love for the better part of a year, has featured both in the single IKARI recorded and now their live set: I feel truly honoured that this thing I’ve made has been a part of IKARI’s story. In software we have the concept of shipping our products: I’d say this guitar has definitely shipped :)
My computer usage has changed quite a bit over the last few years. Outside of work hours I now mostly use my iPad for casual computering, such as browsing, email, social media and so forth. My MacBook is reserved for heavier lifting, like bulk photography editing with Lightroom or Garageband loop creation.
This means I now only open it up once or twice a week – or, if I’ve been away on a trip, even less frequently. And coming back to it even after a couple of days is a fairly tedious and horrible experience.
And it’s all the fault of people like me (at least former me when I worked on apps).
It goes like this: I open up my computer, and immediately my password manager will need an update. Perhaps also my text editor. Or perhaps Little Snitch. I have to wade through a series of dialogs as usually at least one or two bits of software will be shouting “forget that task you wanted to do, I AM SO GREAT I HAVE NEW THINGS”, when all I want to do is some quick task. Oh, what’s that media editor app that’s cloud based, you’ve forgotten my login so I have to dig out my password again from my password manager (assuming it’s finished its update).
And so on and so forth. Mostly from apps that care about their user interface a lot, which is why I bought them in the first place.
Being a casual computer user is a fairly horrible user experience it turns out.
When I used this computer multiple times a day this didn’t bother me. Indeed, the periodic update was new and exciting. As indeed I felt when writing applications for people; I couldn’t wait to tell them about the awesome new things I’d just added for them! As developers our products are our things of which we’re rightly proud, and we’re eager to share. But we forget that our users aren’t like us a lot of the time. Oh, our echo chamber in twitter is just like us and reinforces the idea that we should be issuing an update every week or so. But outside of our social media bubble, things are more annoying.
It is made worse because I know things can be better. iOS updates my apps constantly without interrupting me. I just pick the device up and get on with my tasks. It’s painless and you forget it’s even happening unless you happen to review the updates list in the app store for some reason.
The alternative of not doing updates is terrible – I’m mostly wearing my security hat here. You really should keep your software up to date, people.
But desktop app developers really do need to learn that your app isn’t actually the centre of your users’ world. Do update your app frequently, but give the user the option to have updates apply silently at some safe point. I know that’s hard, but solving hard problems is why your users give you money to make great products.
The obvious real time killer is when you turn on your computer and so many apps do this you spend half an hour writing a blog post about why they annoy you. What was it I was just about to do?
Super proud of my brother and his pals who together have formed the band IKARI, who have their debut single launch today, Ghosts.
You can snag it on iTunes and Spotify etc. They’ve done a superb job, a great professional production.
I’m also just a wee bit proud too that the guitar my brother is playing on the single and in the video is the one that I built for him: it’s amazing to have built something that enabled him in some small part to create this single.
If the single takes your fancy then IKARI are playing at King Tut’s in Glasgow on August 11th, go to their site to get tickets!
When I switched from my old Buell Ulysses adventure bike to the Brammo electric sports bike, the idea was that we’d hire a bigger bike occasionally for longer trips (the Buell having seen us round both Wales and Scotland, not to mention countless runs around the Norfolk coast). But in practice we didn’t do that, and we didn’t just miss the epic trips: the Brammo, as fun as it is, doesn’t even allow us midrange trips without lots of logistics around charging. So in the end we only ever went out around the local countryside on the Brammo.
Thus, somewhat accidentally, we’ve now ended up with a new KTM Superadventure S 1290 adventure bike, so that Laura and I can go touring and exploring again.
Similar to the Buell, the KTM stands out for having some amazing engineering aspects to it. The dash is just a single TFT display, similar to an iPad mini in size, that lets you control everything you need. It has switchable engine and suspension damping modes that let me instantly move between touring, sports, town, and off-road optimised configurations. This may sound superfluous, but in the rain being able to drop the power and smooth the throttle is wonderful on a bike that can also do fully loaded two-up touring without breaking a sweat. I can tell it whether I have a pillion and/or luggage and it’ll similarly adjust.
Having done a weekend tour around Norfolk to complete the run-in interval before the first service, we can happily attest it makes a great touring bike. The functional but not pretty aluminium boxes are particularly nice. Most side boxes are side loading, whereas these are top loading, which is much more convenient, and you can fit a helmet in two of the three boxes. We also have fitted bags for the two side boxes which makes rocking up to your day’s destination and unloading a breeze.
I can also attest that, despite being somewhat large as bikes go, it does do off-road, and I took the heaviest and most expensive bike I’ve ever owned for my first go at green-lane riding:
My friend Wil from work is an experienced green-lane rider, often commuting into work along these lanes on his BMW 1200GSA, which he makes look like a wee 125 given how effortlessly he flicks it around, and he took me along his usual route so I could try the KTM in its other natural habitat. It was a lot of fun, and a lot of effort – standing up to ride off-road is not something I’m practiced at, and I suspect my technique is all off. But it was great, and the bike was more than happy despite its size and weight – though I think I’ll save up for some crash bars before I do another such excursion :) I did enjoy it though, and I’ve signed up to be a member of the Trail Riders Fellowship, which promotes sensible and sustainable green-lane use by motorcyclists.