Despite rumours, my day job isn't writing iPhone apps (sadly). I had been creating websites and doing general software development as a "day job" well before I started with MonoTouch and iPhone apps.
In the last 12 months - since we got back from our mega-trip back to New Zealand - Leonie and I (and a number of others) have been working on a rewrite and relaunch of Universal Music Japan, and on Thursday, it went live.
We started out with a huge specification document from a Japanese company, which had taken 12 months to produce and, beyond 400 pages of text, had very little to show for it. The build was expected to take another 24 months.
So, out with the old, and in with the new!
(this is the old site)
(this is the new site)
Some of the challenges we faced, and some numbers
-
The site consists of 16 labels, around 1,600 individual artists and around 200,000 products. The Umbraco database has around 65,000 nodes. At current traffic levels, the site gets around 1.8 million visits and 6 million pageviews a month.
All of this is served off 2 servers (8GB / 4 cores each, mostly idle), a CDN, Umbraco 4.7.2 and RavenDB (for the product database). The RavenDB database is around 8GB, SQL Server is around 2GB per environment, and the media (images etc.) adds another 8GB or so.
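For what it's worth, there's nothing exotic about how a product database like this gets wired up - it's the standard RavenDB client pattern. A minimal sketch, assuming a database called Products and an illustrative Product class (neither name is from the real project):

```csharp
using Raven.Client;
using Raven.Client.Document;

// Illustrative product document - the real schema isn't public.
public class Product
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string ArtistId { get; set; }
}

public static class ProductStore
{
    // One DocumentStore per app domain: expensive to create,
    // cheap to use, and thread-safe.
    public static readonly IDocumentStore Store = new DocumentStore
    {
        Url = "http://localhost:8080",   // assumption: local Raven instance
        DefaultDatabase = "Products"     // assumption: database name
    }.Initialize();

    public static Product Load(string id)
    {
        using (IDocumentSession session = Store.OpenSession())
        {
            return session.Load<Product>(id);
        }
    }
}
```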
The site integrates with various external parties, including the Universal Music store, Amazon, iTunes, HMV, Tower Records ("big in Japan"), and various ring-tone and ring-back services. It also has QR codes. And yes, people actually use them.
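The QR codes themselves just encode a URL (a product or mobile-site link, say). As a sketch of the general idea - this one renders the image via the old Google Chart API, which is my choice for illustration, not necessarily what the site uses:

```csharp
using System;

// Sketch: build an image URL that renders a QR code for a given page.
// Any QR library would do the same job; the chart service is just the
// zero-dependency way to show the idea.
public static class Qr
{
    public static string ImageUrl(string targetUrl, int sizePx = 200)
    {
        return "https://chart.googleapis.com/chart?cht=qr"
             + "&chs=" + sizePx + "x" + sizePx
             + "&chl=" + Uri.EscapeDataString(targetUrl);
    }
}
```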
As far as I know, this is a huge site by Umbraco standards. The umbraco.config (the local XML cache) is around 80MB, we have over 65,000 nodes, and it takes the app pool about 3 minutes to spin up and prime the cache.
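Three minutes is long enough that you don't want a real visitor to be the one paying for it, so after a recycle it's worth priming the cache yourself. A minimal sketch of that kind of warm-up, called from a deploy step or scheduled task (the URL list is illustrative):

```csharp
using System;
using System.Net;

// Sketch: hit a few key pages after an app pool recycle so the first
// real visitor doesn't pay the ~3 minute cache-priming cost.
public static class WarmUp
{
    public static void Run()
    {
        var urls = new[]
        {
            "http://www.universal-music.co.jp/",
            // ...a handful of label/artist landing pages...
        };

        foreach (var url in urls)
        {
            try
            {
                using (var client = new WebClient())
                {
                    client.DownloadString(url); // forces the cache to load
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("warm-up failed for {0}: {1}", url, ex.Message);
            }
        }
    }
}
```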
Some things people take for granted - publishing the entire site in one go, removing a field from a document type - we can't do (there are workarounds for some of it, but they take a long time to run). We had multiple issues with Umbraco Courier, but Per has been very responsive in getting bugs and problems fixed - thanks!
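For the record, the bulk-republish workaround is roughly "walk the tree and publish node-by-node via the API" rather than using the all-at-once republish. A sketch of the shape of it, written against the Umbraco 4 Document API from memory, so treat the details as assumptions:

```csharp
using umbraco.BusinessLogic;
using umbraco.cms.businesslogic.web;

// Sketch: republish a subtree node-by-node instead of all at once,
// which doesn't cope with a 65k-node tree. Umbraco 4 API details
// are from memory - treat them as assumptions.
public static class BulkPublish
{
    public static void Republish(int rootId)
    {
        var root = new Document(rootId);
        PublishRecursive(root, new User(0)); // 0 = the admin user
    }

    private static void PublishRecursive(Document doc, User user)
    {
        doc.Publish(user);
        umbraco.library.UpdateDocumentCache(doc.Id); // refresh the xml cache
        foreach (Document child in doc.Children)
            PublishRecursive(child, user);
    }
}
```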
Maintenance: the editor site - used by a team of about 20 - needs a lot more horsepower than the public-facing site. The editor machine has 16GB / 4 cores, and often uses all of it. Luckily we use cloud-based hosting (a private cloud from Vnetrix), so we were able to scale it up with just a reboot.
Working with a distributed team (development in the UK, customer in Japan) isn't too bad, but working with another language and culture has been a challenge. This is something I don't think anyone should underestimate. Without our translator, Nana, we'd be totally screwed, and even with her, there are lots of things which just don't translate - assumptions about how the other party is thinking which we would normally take for granted.
All the artist pages (and most of the other pages) are templated and widgetised, so the editors can change the layout at will and move things around the page. They also have full CSS/image/JS control over the artist sites if they want it. They have done this for some of the tier-1 artists, e.g. The Birthday, Naoto Inti Raymi, KARA and Girls' Generation, but not for smaller artists, e.g. Okamura Yasuyuki. If they don't want a fully bespoke site, they can pick from a list of pre-done themes, and they can also upload header images and logos.
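The theme support mostly boils down to deciding which stylesheet to serve for a given artist. A hypothetical resolver along these lines (property names and paths are illustrative, not the real ones):

```csharp
// Hypothetical sketch of theme resolution for an artist site:
// a fully bespoke stylesheet wins, then a chosen pre-done theme,
// then the default.
public static class ArtistTheme
{
    public static string StylesheetFor(string artistAlias, string chosenTheme, bool hasBespokeCss)
    {
        if (hasBespokeCss)
            return "/media/artists/" + artistAlias + "/site.css"; // fully bespoke

        if (!string.IsNullOrEmpty(chosenTheme))
            return "/css/themes/" + chosenTheme + ".css";         // pre-done theme

        return "/css/themes/default.css";                         // fallback
    }
}
```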
We possibly went about things backwards, which didn't help: we designed (and rolled out) some of the artist sites first, then moved on to the other sections. Going top-down would have been better.
We made extensive use of Powerup for deployments. We can now deploy to 4 sites (3 machines) with 4 commands (one for each site). UAT is deployed directly from TeamCity (manually, at the press of a button). Kudos to Marcel duPreez and Andrew Revell for this.
We have a number of automatic bits in the CMS to make things as easy as possible for the content editors: news is auto-filed by date, and if you make a new artist/genre/subgenre/etc., it will build most of it for you from a template. We can also upload a whole label's worth of artists (and auto-create their whole sites) using a spreadsheet.
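The news auto-filing is the usual year/month folder pattern: on save, make sure the folders exist under the news root and move the item into them. Sketched against the Umbraco 4 Document API (again from memory - treat the specifics, including the folder document type alias, as assumptions):

```csharp
using System;
using umbraco.BusinessLogic;
using umbraco.cms.businesslogic.web;

// Sketch: auto-file a news item into year/month folders under the
// news root. Umbraco 4 API details are from memory - assumptions.
public static class NewsFiler
{
    public static void File(Document newsItem, int newsRootId, DateTime date)
    {
        var user = new User(0);
        var year = FindOrCreate(newsRootId, date.Year.ToString(), user);
        var month = FindOrCreate(year.Id, date.ToString("MM"), user);

        if (newsItem.Parent.Id != month.Id)
            newsItem.Move(month.Id); // re-file under the right folder
    }

    private static Document FindOrCreate(int parentId, string name, User user)
    {
        var parent = new Document(parentId);
        foreach (Document child in parent.Children)
            if (child.Text == name)
                return child;

        var folderType = DocumentType.GetByAlias("DateFolder"); // illustrative alias
        return Document.MakeNew(name, folderType, user, parentId);
    }
}
```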
We still have a fair bit to do on the site. Most of it is bits we pushed out of the release, plus some more performance tuning, but overall, the site is running well.