Why do people always want to tell you that persisting a “domain model” is a work item spanning many months if you can’t generate the code?
Sorry folks, with all due respect to model-driven software engineering, but that’s totally nonsense. It’s all about developer productivity. The first object is always the hardest. But if you follow the right approach, build good APIs, and use the right tools, then it’s just a matter of minutes for every other domain object.
Don’t get me wrong. I also use EMF and I love it. But some statements are just plain wrong.
9 thoughts on “Persisting Data”
You are soooooo right.
Different projects require different tools. I’ve been involved in lots of projects where EclipseLink makes a lot of sense, and a lot of projects where hacking simple SQL in PHP makes sense. I haven’t personally used it, but I’m sure there are a lot of projects where Teneo makes perfect sense.
It’s awesome, as software engineers, to have so many great (free) tools in our toolbelts!
“Persisting a domain model” is a somewhat misguided task in itself. The biggest issue is that the data normally outlives the version of the application (and even the application itself), and the shorter-lived entity should depend on the longer-lived entity rather than the reverse.
Constantine, I agree with you. Data usually lives longer than applications, especially when databases are involved. But data structures are allowed to evolve over time. You can refactor databases too. The refactoring could become necessary for scalability or storage reasons, and a 1:1 mapping might not be possible anymore in such cases. It’s great if you then have the right API to catch this evolution in a layer that does not require touching the whole application. If this is a many-months job, something is wrong with your APIs.
Yes. Databases change as well, and the data access layer should support such migration.
As for frameworks, the question of whether to use them often boils down to a balance between the DRY and KISS principles. For small apps, a custom DAO might be the way to go. But for bigger apps, frameworks like Hibernate can save a lot of development and testing effort.
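To make that trade-off concrete, here is a minimal sketch of what a hand-rolled DAO tends to look like (all names hypothetical, and an in-memory map stands in for the real table). One such class per entity is perfectly manageable in a small app; a framework like Hibernate earns its keep once entities, associations, and queries multiply:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Hypothetical minimal DAO: generic CRUD plumbing, one instance per entity type.
class SimpleDao<ID, T> {
    private final Map<ID, T> store = new LinkedHashMap<>(); // stand-in for a table
    private final Function<T, ID> idOf;                     // extracts the primary key

    SimpleDao(Function<T, ID> idOf) { this.idOf = idOf; }

    void save(T entity)         { store.put(idOf.apply(entity), entity); }
    Optional<T> findById(ID id) { return Optional.ofNullable(store.get(id)); }
    void delete(ID id)          { store.remove(id); }
    int count()                 { return store.size(); }
}
```

Even this toy version shows where the effort goes: the generic plumbing is written once, and each entity only contributes its key-extraction function.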
So what exactly is the “right approach”, then? And what are the “good APIs” that you want to build yourself before you can persist your data?
For myself, I never want to write any boilerplate code by hand, as there is usually a better use for developer productivity.
The right approach and APIs really depend on you and your software. For example, some time ago I developed a fairly simple framework for persisting objects into a database using Spring’s JDBC DAO support. Frankly, this framework just fell out of a “business” feature. The feature took two weeks. Thereafter, it was just a matter of minutes to bring new objects into the database using this framework.
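The core idea behind such a framework can be sketched without the Spring dependency (all names here are hypothetical; Spring’s real JdbcTemplate and RowMapper operate against a live JDBC connection, while this sketch fakes rows as maps). The point is that once the shared plumbing exists, supporting a new domain object costs just one small mapper:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical stand-in for a JdbcTemplate-style helper: the plumbing lives
// here once, and each entity only contributes a row-mapping function.
class MiniTemplate {
    // A real implementation would execute SQL; here rows arrive as maps.
    <T> List<T> query(List<Map<String, Object>> rows,
                      Function<Map<String, Object>, T> rowMapper) {
        return rows.stream().map(rowMapper).collect(Collectors.toList());
    }
}

// Bringing a new persistable type into the picture is then "a matter of minutes":
record Customer(long id, String name) {}
```

Adding `Customer` required only the record itself plus the one-line mapper passed to `query` at the call site.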
But that’s just one example and one use case. There are other dependencies involved as well, such as tools and languages. It really depends on your scenario. But a simple statement like “it will take you months if you don’t generate code” is just wrong.
As general as that statement is, it *must* be wrong 😉
We have a 300+ class, highly complex object model that would in fact take us months to write by hand, I think. Having bidirectional associations managed by the generated code is more than nice to have, for example.
Right. That’s why I like EMF (for example). If you have such a complex model, you typically have many associations, and it’s great to generate those. IMHO, it gets challenging when you need to modify generated code. But maybe that’s just my subjective experience, and it’s a problem a generator should address.
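To illustrate the boilerplate that generated code takes off your hands: a hand-written bidirectional association has to keep both sides in sync in every mutator (the `Order`/`LineItem` names below are hypothetical; EMF-generated models do this bookkeeping for you). Multiplied across hundreds of classes, this is exactly the kind of code you want generated:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical one-to-many association, maintained by hand.
class Order {
    final List<LineItem> items = new ArrayList<>();

    void addItem(LineItem item) {
        if (!items.contains(item)) {
            items.add(item);
            item.setOrder(this); // keep the inverse side consistent
        }
    }
}

class LineItem {
    private Order order;

    void setOrder(Order order) {
        if (this.order == order) return; // already consistent; stops recursion
        Order old = this.order;
        this.order = order;
        if (old != null) old.items.remove(this);   // detach from previous owner
        if (order != null) order.addItem(this);    // re-sync the owning side
    }

    Order getOrder() { return order; }
}
```

Note that each side calls back into the other, so both `order.addItem(item)` and `item.setOrder(order)` leave the model consistent no matter which one the caller uses.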