12. February 2015

We can't just use Google

Google is an amazing company. They create incredible technology, often in a very open way. They are not only creating great tools now; they're also pushing the frontiers of humanity. Self-driving cars, for example, are a transformative technology that will completely change our cities and how we think about transportation.

They also have the best search engine product on the market. It makes it trivial to find the information you need immediately. You can outsource a surprising amount of your mental faculties to Google.


24. November 2014

SLC

This is a response to the StrongLoop post on Node.js deployment.

SLC looks like a really interesting tool, though bear in mind that I haven't dug through the code yet. strong-build seems well thought out. The kinds of problems it deals with are the ones that bite you hardest at the times you can least afford to be bitten. When writing Serious Business applications you simply cannot let an outage at a third-party registry (npm, GitHub, etc.) affect your builds.

I would say that running your own registry is a perfectly acceptable way to deal with this, though. What you're trying to do is bring all the important things inside your locus of control. Not everything has to be files on disk.
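As a sketch of what that looks like in practice (the registry URL below is a made-up placeholder, not anything from the StrongLoop post), you point npm at a registry you control and pin your dependency tree so the build is reproducible even when the public registry is unreachable:

```shell
# Point npm at a registry inside your locus of control.
# (URL is a hypothetical placeholder.)
npm config set registry https://registry.internal.example.com/

# Pin the full resolved dependency tree, so a build next month
# resolves exactly the same packages as a build today.
npm shrinkwrap
```

Commit the resulting npm-shrinkwrap.json alongside your code and the registry outage problem largely disappears from your build pipeline.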

strong-deploy seems fine, a little bit of sugar to abstract away deployment commands.

strong-pm sounds good. A thing that instruments and manages your node processes. Great!

After that, though, the blog post ended and I got really confused. What orchestrates the strong-pm processes across different machines? This setup seems to assume that you will only deploy processes to one computer at a time. Not only is that a bad idea for availability, it’s not going to work very well at scale.

Drawing an analogy to my own system: strong-pm is Cavalry, and Field-Marshal and Quartermaster are missing.

Also, it's good that SLC is built from composable interfaces, but it's unclear whether strong-pm can handle languages other than Node.js. I am hopeful they've made it agnostic, but the docs are strangely silent on the issue and it doesn't get a mention in the blog post.

So: a great start, but like so many other open-source PaaS-ish tools it completely misses the point, in my opinion. There is still the opportunity to make it great by adding an orchestration tool.

21. November 2014

Security and Privacy on the Internet

The internet is a dangerous place.

We've known for years about man-in-the-middle attacks, XSS, injection, session hijacking, etc. These are well-understood threats. The fact that so many applications are still vulnerable to them despite that is the subject for another post.
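To make the "well understood" point concrete, here is a minimal sketch of the classic defence against one of those threats, reflected XSS: escape user input before interpolating it into HTML. The escapeHtml helper is hypothetical, written for illustration rather than taken from any library mentioned here.

```javascript
// Minimal sketch: HTML-escape untrusted input before it reaches markup.
// This is the textbook XSS mitigation, not a complete security strategy.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')   // must run first, or it double-escapes
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Rendered verbatim, this payload would execute; escaped, it is inert text.
const userInput = '<script>alert(1)</script>';
console.log(escapeHtml(userInput));
// → &lt;script&gt;alert(1)&lt;/script&gt;
```

Every mainstream templating library does this for you by default these days, which is exactly why these remain "well understood" rather than unsolved problems.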

The security and privacy landscape of the internet has recently bucked pretty dramatically underneath us. I have been concerned about this and have been following it closely. I recently spoke on the subject to a group of web developers and, after the evening was over, I got this tweet:

@davidbanham I enjoyed tonight’s SydJS presentation. Meant to ask, but I couldn’t find you, what are good beginning resources for security?

And I had no good answer. I delayed responding while I googled around for something that would fit the bill, but came up blank. A few days later, I found this talk by Douglas Crockford. This talk is well worth an hour of your life. Douglas is an engaging speaker whose intellectual curiosity is infectious. He's a genuinely interesting guy to be around, and in 2012, when this talk was recorded, it was great advice. It still is great advice, but the situation has gotten so much more complex since then. There was a moment when he made reference to trusting his server. So did I, in 2012.

This talk, like all of the security advice I could find, is centered around ensuring that the data coming into and out of your Trusted Bastions of Computing is valid and safe. If in the year A.D. 2014 you still believe that you can trust what happens on your server and within your datacentre you have not been paying attention.

The present reality is that state actors have the technical and legal capability to intercept communications in transit across public and private networks. We now know that they do this without any suspicion of wrongdoing. That is very troubling for the ideals of jurisprudence and the pursuit of privacy.

So what practical advice exists for the people at the coalface of this situation? What do the people building the applications that transit data across these hostile networks do to safeguard the trust their users have placed in them? How do we resolve the ethical quandary of assuring our users we will keep their data safe in an uncertain world? Where do I tell Ryan to go for the answers he seeks?

I am making an effort to add to the body of knowledge with projects like Under the Radar. I will continue attempting to act ethically towards my clients and my users.

This is a call to arms to my fellow developers to recognise the deficit and do your best to rectify it.