ModernDatabases
Move from Oracle or MS SQL Server to Postgres. Set up database replication for real-time clustering or disaster recovery across seismic zones. Move databases from on-premises to AWS, or from AWS to Azure. Basically, everything database: relational, non-SQL, or in-memory stores like Redis.
Why would I even think of switching databases?
A variety of reasons. Sometimes, your application is growing too rapidly and the licence fees are killing you. Sometimes, you may need to augment your transaction DB with some non-SQL full-text searchable DB. Sometimes, you need a very large, sharded DB cluster where you need replication and eventual consistency, not ACID properties.
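To make the sharded-cluster idea concrete, here is a minimal sketch of routing rows to shards by hashing a key. The shard names and the modulo scheme are illustrative assumptions only; real systems use richer schemes such as consistent hashing or range-based routing.

```python
import hashlib

# Illustrative shard names; a real cluster would map these to
# actual database instances.
SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_for(user_id: str) -> str:
    """Pick a shard deterministically from the user's ID, so every
    reader and writer agrees on where a given user's rows live."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return SHARDS[int.from_bytes(digest[:8], "big") % len(SHARDS)]
```

The key property is determinism: the same ID always routes to the same shard, with no central lookup table needed.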
Do I need to re-write code to move databases?
You often don’t. Moving between modern relational databases can usually be done without re-writing the Java, .NET or other code that holds the business logic. The stored procedures, if any, need to be edited or re-written. However, if you are moving part of your data to a non-SQL database, then a substantial code re-write may be needed.
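One reason application code ports so cleanly is that most languages access databases through a driver-neutral interface. A sketch in Python, where any driver following the DB-API (PEP 249) exposes the same shape; `sqlite3` stands in here, but swapping the connection for a Postgres one would leave the business logic untouched (only SQL-dialect details, such as the parameter placeholder style, differ between drivers):

```python
import sqlite3

def total_order_value(conn) -> float:
    """Business logic written against the generic DB-API connection.
    Works unchanged on any conforming driver; only the connection
    object (and stored procedures, if any) is engine-specific."""
    cur = conn.cursor()
    cur.execute("SELECT COALESCE(SUM(amount), 0) FROM orders")
    (total,) = cur.fetchone()
    return total

# sqlite3 as a stand-in engine for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)", [(10.0,), (2.5,)])
total = total_order_value(conn)
```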
Why would I need a non-SQL database?
The first and simpler use-case is Redis as a cache. This is a very useful addition to a legacy application which knows its relational DB as its One True Source Of Truth. These caches boost performance, but code change is needed.
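The code change involved is the classic cache-aside pattern: check the cache first, fall back to the database on a miss, and write the result back with a TTL. A minimal sketch, with a plain dict standing in for the Redis client (a real implementation would use redis-py's `get` and `setex`); the function names and TTL are illustrative assumptions:

```python
import time

# A dict stands in for a Redis client in this sketch:
# key -> (expiry timestamp, cached value).
cache: dict = {}
TTL_SECONDS = 300

def slow_db_lookup(user_id: str) -> str:
    # Placeholder for the real query against the relational DB.
    return f"profile:{user_id}"

def get_profile(user_id: str) -> str:
    """Cache-aside: serve from cache if fresh, else hit the DB and
    repopulate the cache with a TTL."""
    entry = cache.get(user_id)
    if entry is not None and entry[0] > time.time():
        return entry[1]                          # cache hit
    value = slow_db_lookup(user_id)              # cache miss
    cache[user_id] = (time.time() + TTL_SECONDS, value)
    return value
```

The relational DB remains the One True Source Of Truth; the cache only ever holds copies that expire on their own.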
The second use-case is something like Solr or Elasticsearch as a full-text searchable database, holding data other than transaction data. This shifts part of the general search and auto-complete load away from the transaction DB to a faster, lighter, clustered, replicated document store.
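The core structure behind such engines is the inverted index: tokens map to document IDs, so search and auto-complete never touch the transaction DB. A toy sketch (real engines add analysis, ranking, sharding and much more; the sample documents are invented):

```python
from collections import defaultdict

# Toy inverted index: token -> set of document IDs containing it.
index = defaultdict(set)
docs = {1: "reinforced concrete inspection", 2: "concrete mix report"}

for doc_id, text in docs.items():
    for token in text.lower().split():
        index[token].add(doc_id)

def search(term: str) -> set:
    """All documents containing the term."""
    return index.get(term.lower(), set())

def autocomplete(prefix: str) -> list:
    """Indexed tokens starting with the typed prefix."""
    return sorted(t for t in index if t.startswith(prefix.lower()))
```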
The third use-case is something like CouchDB: awesome distributed replication with eventual consistency. And for super-heavy scalable transaction loads, stick to relational but move to PlanetScale. They are re-defining what scalable cloud databases mean. We can help with all of these.
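Eventual consistency means replicas may disagree temporarily but converge once they sync. A much-simplified sketch where each replica keeps a (revision, value) pair per key and the higher revision wins on merge; note that CouchDB's real protocol uses revision trees and preserves conflicting revisions for the application to resolve, so this last-write-wins rule is an illustrative simplification:

```python
def merge(a: dict, b: dict) -> dict:
    """Merge two replicas, each mapping key -> (revision, value).
    The higher revision wins, so both merge orders converge."""
    merged = dict(a)
    for key, (rev, value) in b.items():
        if key not in merged or rev > merged[key][0]:
            merged[key] = (rev, value)
    return merged

# Two replicas that diverged while disconnected (invented data).
replica_1 = {"doc1": (2, "edited offline")}
replica_2 = {"doc1": (1, "original"), "doc2": (1, "new doc")}
converged = merge(replica_1, replica_2)
```

Merging in either direction yields the same state, which is exactly the convergence guarantee eventual consistency promises.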
What if there are bugs in these modern DBs?
Bugs exist in every large piece of software. The real question is: does a bug impact you if you never encounter one, even after extensive testing?
All the mission-critical databases we use have commercial support available from the product maintainers in case your corporate IT policy requires such support.
We have used these databases internally in our mission-critical projects and have never encountered a bug we couldn’t work around or get a fix for. No druids necessary for this magic.
We know modern databases
Edelweiss PG Migration
Edelweiss had a large application with several TB of data, growing at a few TB per year. It ran on a proprietary DBMS, and the size of the installation was increasing sharply. The database had some large stored procedures which scanned and processed the entire data store once a night to create summary figures, and these were hitting performance bottlenecks. We migrated the entire database to Postgres, simultaneously re-factoring the code to increase throughput. The application code continued in .NET, and needed almost zero changes.
UPSDM using Postgres
The Uttar Pradesh Skill Development Mission (UPSDM) needed a portal built to handle vocational training programmes for millions of unemployed youth of the state. This application experienced very heavy loads, with several dozen user registrations per second, sustained hour after hour. We used Postgres on AWS, with Provisioned IOPS, to deliver very high reliability and throughput from a single database instance.
Keurix
This application changed the on-site inspection sector for the civil construction industry in the Netherlands. It internally used MySQL on the AWS cloud to handle all sorts of structured and unstructured data, syncing periodically with mobile apps running on iPads. Data included PDFs of detailed blueprints, sometimes extending to hundreds of MB per file. Automatic backups, horizontal scalability and high reliability were all provided through our open source stack.