The Core Problem With Magento

I stumbled across this bit while looking for the Mythical Man-Month, actually. This passage describes exactly what’s wrong with Magento. I suspect eBay will fix it – or buy yet a third commerce company:

“[D]evelopers are sometimes tempted to bypass the RDBMS, for example by storing everything in one big table with two columns labelled key and value. While this entity-attribute-value model allows the developer to break out from the rigid structure imposed by a relational database, it loses out on all the benefits, since all of the work that could be done efficiently by the RDBMS is forced onto the application instead. Queries become much more convoluted, the indexes and query optimizer can no longer work effectively, and data validity constraints are not enforced. Such designs rarely make their way into real world production systems, however, because performance tends to be little better than abysmal, due to all the extra joins required.” — (“Inner-Platform Effect“)
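To make the quoted point concrete, here's a small sketch – using Python's built-in sqlite3 module, with hypothetical table and attribute names – of what fetching a single product looks like under an EAV layout versus a conventional column-per-attribute table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()

# Conventional layout: one row per product, one typed column per attribute.
c.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
c.execute("INSERT INTO product VALUES (1, 'Widget', 9.99)")

# EAV layout: every attribute of every entity becomes its own row.
c.execute("CREATE TABLE eav (entity_id INTEGER, attr TEXT, value TEXT)")
c.executemany("INSERT INTO eav VALUES (?, ?, ?)",
              [(1, "name", "Widget"), (1, "price", "9.99")])

# Conventional: one trivial, index-friendly query.
flat_row = c.execute("SELECT name, price FROM product WHERE id = 1").fetchone()

# EAV: one self-join per attribute just to rebuild the same row,
# and every value comes back as untyped text.
eav_row = c.execute("""
    SELECT n.value, p.value
    FROM eav AS n
    JOIN eav AS p ON p.entity_id = n.entity_id
    WHERE n.entity_id = 1 AND n.attr = 'name' AND p.attr = 'price'
""").fetchone()

print(flat_row)  # ('Widget', 9.99)
print(eav_row)   # ('Widget', '9.99') -- the price is now a string
```

A real Magento product has dozens of attributes, so the number of self-joins – and the optimizer's pain – grows accordingly, and the type information (and any constraints the database could have enforced) is gone.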

If that doesn’t describe the exact problem, I don’t know what does. Everything else can be fixed: documentation can be improved, extensions can be made simpler to create, and bugs get shaken out with every release.

I suspect the inner core of Magento will have to be migrated to a VM-based solution, and much of the database normalized, before it can be effectively used as – or as part of – eBay’s X.commerce platform. Read about how Twitter was forced to abandon Ruby and move to Scala for many of its requirements here: Twitter on Scala. I see no reason not to go this route: the product is clearly destined to be a SaaS offering.

EAV modeling does provide a flexible way of giving metadata shape and form to the wide variety of saleable products in the world. It’s an attempt to be an all-describing, all-encompassing, flexible solution for everything, everywhere.

Alas, it’s heavy. In this particular case it’s in PHP (argue amongst yourselves if you must). And applying EAV to things like customers, SKUs, and orders was – and is – a mistake; hence we have constructs like the “flat” tables, which must be built and indexed just to make the thing perform reasonably well.

Why not write an abstraction layer that can build normalized database table structures and preserve the performance advantages? Seems doable to me. Perhaps the era of always running interpreted code – instead of compiling to an intermediate form – has spoiled us to the point that we’d sacrifice performance rather than wait for a build (of said tables) – or do we just not even have the notion because we’ve never compiled/built before?

Some things are immutable. The Internet detests that notion. However, performance and reliability often like things that stay the same, even if we have to wait a little for the compiler.
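A minimal sketch of that abstraction-layer idea, again in Python with invented names: treat the attribute metadata as a schema definition and “compile” it into an ordinary, typed table at build time, so runtime queries hit plain indexed columns instead of the EAV store:

```python
import sqlite3

# Hypothetical attribute metadata -- the kind of thing an EAV system
# keeps as data rather than as schema.
PRODUCT_ATTRIBUTES = {
    "name": "TEXT",
    "price": "REAL",
    "weight": "REAL",
}

def compile_table(conn, table, attributes):
    """The 'build step': turn attribute metadata into a normal table.

    Assumes the metadata is trusted (it is interpolated into DDL).
    Real code would also create indexes here, and re-run the build
    whenever the attribute set changes.
    """
    cols = ", ".join(f"{name} {sql_type}" for name, sql_type in attributes.items())
    conn.execute(f"CREATE TABLE {table} (id INTEGER PRIMARY KEY, {cols})")

conn = sqlite3.connect(":memory:")
compile_table(conn, "product_flat", PRODUCT_ATTRIBUTES)
conn.execute("INSERT INTO product_flat VALUES (1, 'Widget', 9.99, 0.5)")

# Runtime query: plain columns, real types, no self-joins.
row = conn.execute(
    "SELECT name, price, weight FROM product_flat WHERE id = 1").fetchone()
print(row)  # ('Widget', 9.99, 0.5)
```

The trade-off is exactly the one described above: changing the attribute set now costs a rebuild (a DDL migration) instead of an INSERT, which is the “waiting for the compiler” part.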

Heroku ‘uninitialized constant Rake::DSL’

I ran into this problem while learning how to work with Heroku. (Heroku had some pretty big news of their own this week, announcing that Yukihiro Matsumoto had joined them as “Chief Architect, Ruby”.)

The problem happens after you’ve pushed your app up, when you try to run rake db:migrate.

I found the solution in one of the answers over at Stack Overflow, … so if this doesn’t work for you, try one of the other answers (shrug).

Basically, I had to use an older version of rake. So in my Gemfile, I added:

# Hack to workaround Heroku 'uninitialized constant Rake::DSL'
gem "rake", "0.8.7"

Then a few commands to get ourselves up to date:

$ bundle update rake
$ git commit -a -m "Use 0.8.7 of rake"

Then push the change up to Heroku so it takes effect, and of course re-run our db:migrate:

$ git push heroku master
$ heroku rake db:migrate