Are we reinventing the future and ignoring the past?

I was recently talking to a good friend about his startup and some of the technical challenges he
was facing. His app is a lovely SPA written in Angular with a Rails backend. The issue was one of
collaboration and synchronisation.

If the user opens four additional tabs of the app, how does he make sure they stay in sync with the initial
one, when each now has its own copy of the data structures in memory? The same problem arises when two people are
editing the same entity on separate machines.

Some of the ideas we came up with while brainstorming included: using websockets (with something like SignalR or Meteor)
to push the changes out to all interested clients; pushing a dirty flag out to force (or at least tell) the user to refresh; or patching
the internal in-memory structures on the fly.

All of them are possible, but none of them resulted in an "ah-ha!" moment.
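
For what it's worth, a minimal sketch of the first idea - the server pushing change notifications over a websocket, with each open tab patching its own in-memory copy - might look roughly like this in TypeScript. The endpoint URL, the message shape, and the version-based conflict check are all made up for illustration; this is not his actual implementation.

```typescript
// Hypothetical change notification the server pushes whenever an entity changes.
interface EntityChanged {
  entity: string;                 // e.g. "invoice"
  id: string;                     // which record changed
  version: number;                // server-side version, so stale updates can be ignored
  patch: Record<string, unknown>; // only the fields that changed
}

// A tiny client-side cache standing in for the SPA's in-memory data structures.
const cache = new Map<string, { version: number; data: Record<string, unknown> }>();

// Each open tab keeps its own connection; the server broadcasts to all of them,
// so five tabs (or two users on separate machines) converge on the same state.
const socket = new WebSocket("wss://example.com/changes"); // assumed endpoint

socket.onmessage = (event: MessageEvent) => {
  const change: EntityChanged = JSON.parse(event.data);
  const key = `${change.entity}:${change.id}`;
  const current = cache.get(key);

  // Drop anything older than what we already have.
  if (current && current.version >= change.version) return;

  cache.set(key, {
    version: change.version,
    data: { ...(current?.data ?? {}), ...change.patch },
  });

  // In a real Angular app you would update the relevant scope or observable here.
  console.log(`updated ${key} to version ${change.version}`);
};
```

The appeal is that the same listener handles both cases - extra tabs for one user, or different users on different machines - but you still have to decide what "the server's version wins" means for half-edited forms, which is where the aha moment went missing.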

Side note: he's looking for a (coding!) CTO, based in London I suspect, so if you're interested, get in touch with him.

That, along with this recent article on MongoDB which is doing the rounds,
got me thinking: Why are we not learning from the past?

I heard somewhere[1] that in the finance industry - Wall St, The City et al - the (working) life of a trader is slightly
shorter than the boom-bust cycle. So any given trader will only ever see one boom and one bust in their career, and
generally they are going from a bust to a boom. Once the boom ends, they get out, and a new crop comes in.

As we've seen over the past 13+ years, this is not good for the global markets.

I'm seeing similar things in software, too. We started with mainframes, then most of the processing moved out to the
client with PCs and client-server, then back towards the central server with three-tier architectures and smart clients, then fully onto the
server with web 1.0 and ASP/Citrix, and finally back out to the client with web 2.0 and SPAs. There are not many developers working
at the moment (as a percentage) who have seen more than two of these shifts - usually just web 1.0 and web 2.0.

Are we about due for a pull back to the server side? The ascent of Node.js and Rails' Turbolinks
suggests we are - you still need a good front end, you always did, but state and the source of truth are slowly moving
back to the back end. Almost ubiquitous, reliable network connections also make keeping a (semi-)persistent connection possible.

With a pure SPA, where the client controls the data and sends commands back to the server, there is no longer a
single source of truth for your data. The whole iCloud Core Data debacle[2] really showed that this isn't an ideal
situation. Sure, you can make it work in some cases with locking and user notifications, but in the general case, it's a
very hard thing to get perfect.

In that MongoDB / RDBMS article, the author says:

"But just remember that these kids think they’re solving problems that IBM (et al) solved quite literally before they
were born in some cases, and the features are probably already there in your existing database/technology stack"

Complexity and scale are increasing, but sometimes the solutions we threw out 15 years ago, polished up for the new
situation, are the modern solutions we are looking for.


  1. Not sure where, sadly. Ping me if you know. ↩︎

  2. Versus something like Parse and Helios, where the server is the source of truth. ↩︎

Nic Wise

Auckland, NZ