in-memory database?

  • I have a customer who has a very high number of transactions during certain hours of the day. During these surges, a very high volume of inserts takes place, which puts little pressure on the disk (RAID 10 SAS). He also uses full-text search, and updating the FT index with the new records also takes some time (maybe a couple of seconds). He needs to remove this delay entirely. One approach he suggested was to use some kind of in-memory database. He understands that durability cannot be guaranteed with this approach; however, he is more interested in immediate results than worried about data loss. Not sure if this is possible in SQL Server.

    Here are details:

    SQL Server 2008 R2 SP1

    Windows Server 2008 OS

    DB size - 230 GB

    RAM - 96 GB

    2 quad-core CPUs (hyper-threading enabled)

    RAID 10 SAS (earlier he used SSDs, which are supposedly faster, but we observed a lot of disk queuing and hence moved to SAS; he also has an array of SATA drives)

    Any insight will be helpful.
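
    As a quick check, it may help to see how much of the delay is full-text crawl latency rather than disk I/O by looking at the catalog's population status and outstanding change-tracking work. A minimal sketch; 'FTCatalog' is a placeholder catalog name, not one from this thread:

        -- Population status of the full-text catalog ('FTCatalog' is a placeholder)
        SELECT FULLTEXTCATALOGPROPERTY('FTCatalog', 'PopulateStatus') AS populate_status, -- 0 = idle
               FULLTEXTCATALOGPROPERTY('FTCatalog', 'ItemCount')      AS item_count;

        -- Per-index view of change tracking and the last completed crawl
        SELECT OBJECT_NAME(object_id) AS table_name,
               change_tracking_state_desc,
               crawl_type_desc,
               crawl_end_date
        FROM sys.fulltext_indexes;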



    Pradeep Singh

  • Are the database transaction logs on a different drive from the data? On which drive do you see disk queues?

  • Yes, the data and log files are on separate RAID arrays. I would say there is no disk queue right now (it was present while he was on SSDs).
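
    One way to back that up with numbers, rather than relying on the disk-queue counter alone, is to look at per-file I/O latency from the DMVs. A minimal sketch; thresholds and interpretation depend on the workload:

        -- Average read/write latency per database file, from cumulative stats
        SELECT DB_NAME(vfs.database_id)                             AS database_name,
               mf.physical_name,
               vfs.num_of_reads,
               vfs.num_of_writes,
               vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
               vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
          ON mf.database_id = vfs.database_id
         AND mf.file_id     = vfs.file_id
        ORDER BY avg_write_ms DESC;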



    Pradeep Singh

  • SQL Server doesn't have an in-memory setting. It works as it works and the disk is heavily involved. But if you're not seeing disk queuing, why would having everything in memory help?

    However, you can go to one of the in-memory databases. You'll gain some functionality and performance in some areas, but you'll lose a lot of functionality in others. Shop carefully for an in-memory system and test each one thoroughly with your code and structures to be sure you're not hurting yourself.

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • In terms of the full-text index, the changes are applied asynchronously, and I suspect that no matter how fast your disks are, there's going to be some sort of scheduler that picks up changes on an interval, and that is probably more relevant to the overall latency than the disks. If it needs to be 100% identical to committed data in real time, then it's not the right tool for the job.
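
    For reference, the population behaviour is controlled per index: change tracking can be automatic (the default asynchronous crawl) or manual, in which case tracked changes are only applied when explicitly requested. A minimal sketch, assuming a hypothetical dbo.Orders table:

        -- AUTO: changes are crawled into the catalog asynchronously (the default)
        ALTER FULLTEXT INDEX ON dbo.Orders SET CHANGE_TRACKING AUTO;

        -- MANUAL: track changes but only apply them when told to, e.g. off-peak
        ALTER FULLTEXT INDEX ON dbo.Orders SET CHANGE_TRACKING MANUAL;
        ALTER FULLTEXT INDEX ON dbo.Orders START UPDATE POPULATION;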

    I'd question whether this is really the kind of scale that needs an in-memory system (e.g. is it a real-time trading/telecoms/national-grid type system with five-figure transactions per second?), and whether you have maxed out on disk speed/numbers etc. and optimised all of the database design and procedures.

    Remember that even in-memory RDBMSs (like Oracle TimesTen) write data back to disk for recovery purposes, so you take a hit somewhere, and boy, are they expensive. I don't know of any true OLTP database (i.e. not a read-only/caching system) that operates purely in volatile memory.

  • Grant Fritchey (1/20/2012)


    SQL Server doesn't have an in-memory setting. It works as it works and the disk is heavily involved. But if you're not seeing disk queuing, why would having everything in memory help?

    However, you can go to one of the in-memory databases. You'll gain some functionality and performance in some areas, but you'll lose a lot of functionality in others. Shop carefully for an in-memory system and test each one thoroughly with your code and structures to be sure you're not hurting yourself.

    I was thinking of creating a RAMDisk for this. Never tried it though.

  • I'd also point out that my noteworthy databases regularly hit five-figure TPS without disk issues, so it is possible to have this provided your environment is configured correctly. It may be that the server simply isn't up to the job that it's in place for.

  • I was thinking of creating a RAMDisk for this. Never tried it though.

    Would you really want to completely destroy your database if you had a power outage, or had to reboot the server for patch updates?? They may not worry about the data loss, but I'd imagine the downtime/inconvenience of that would be huge.

    If it's inserting data at such a rate, do they really not care about the integrity of the data that's being inserted, or whether it'll survive at all?

  • MysteryJimbo (1/20/2012)


    I'd also point out that my noteworthy databases regularly hit five-figure TPS without disk issues, so it is possible to have this provided your environment is configured correctly. It may be that the server simply isn't up to the job that it's in place for.

    Hi Jimbo,

    I definitely wasn't suggesting that SQL Server can't handle five-figure transaction rates with the correct design/scale (I'd be interested to hear what your solution looks like, by the way), just that that would be the minimum scale at which I'd be thinking about these specialist solutions.

  • HowardW (1/20/2012)


    I was thinking of creating a RAMDisk for this. Never tried it though.

    Would you really want to completely destroy your database if you had a power outage, or had to reboot the server for patch updates?? They may not worry about the data loss, but I'd imagine the downtime/inconvenience of that would be huge.

    If it's inserting data at such a rate, do they really not care about the integrity of the data that's being inserted, or whether it'll survive at all?

    I wouldn't. I'd be considering FusionIO or similar (which I already have).

    ps. (1/20/2012)


    He understands that durability cannot be guaranteed with this approach; however, he is more interested in immediate results than worried about data loss.

    If you're happy to take regular backups to a physical drive and restore every time you reboot, this "might" work.
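
    As a rough illustration of what that routine would involve (the database name and path below are placeholders, not from this thread), every reboot of a RAM-disk-hosted database would mean restoring from the last backup taken to a physical drive, losing everything since:

        -- Regular backup to a physical (non-volatile) drive
        BACKUP DATABASE SalesDb
        TO DISK = N'D:\Backups\SalesDb.bak'
        WITH INIT;

        -- After a reboot wipes the RAM disk, restore from that backup;
        -- anything committed since the last backup is gone
        RESTORE DATABASE SalesDb
        FROM DISK = N'D:\Backups\SalesDb.bak'
        WITH REPLACE, RECOVERY;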

    It'd be a last avenue of investigation though and will involve a software cost.

  • MysteryJimbo (1/20/2012)


    I wouldn't. I'd be considering FusionIO or similar (which I already have).

    OK, yes, FusionIO sounds awesome, but I wouldn't group it in with in-memory databases - it's really just a specialist ultra-fast SSD that happens to use RAM in its architecture and is non-volatile.

  • HowardW (1/20/2012)


    MysteryJimbo (1/20/2012)


    I wouldn't. I'd be considering FusionIO or similar (which I already have).

    OK, yes, FusionIO sounds awesome, but I wouldn't group it in with in-memory databases - it's really just a specialist ultra-fast SSD that happens to use RAM in its architecture and is non-volatile.

    I don't recall suggesting it be grouped with in-memory DBs. I have access to around 12TB of Fusion-io cards, thanks for letting me know what they are. I was stumped. 😀

  • MysteryJimbo (1/20/2012)


    I was thinking of creating a RAMDisk for this. Never tried it though.

    I had a client that used this for tempdb and the full-text catalog; it worked well, and the data could always be recreated in case of disaster. For the record, my current client has 16.8TB of Fusion-IO, mirrored 😛
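
    For context, relocating tempdb is a metadata change that takes effect at the next service restart, and tempdb is rebuilt from scratch on startup anyway, so a volatile volume is far less risky for it than for user databases. A minimal sketch, assuming a hypothetical R: RAM-disk volume and the default logical file names:

        -- Point tempdb's files at the RAM-disk volume (applies at the next restart)
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = N'R:\tempdb.mdf');

        ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = N'R:\templog.ldf');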

  • MysteryJimbo (1/20/2012)


    I don't recall suggesting it be grouped with in-memory DBs. I have access to around 12TB of Fusion-io cards, thanks for letting me know what they are. I was stumped. 😀

    I wasn't saying it for your benefit. We're in a thread about in-memory databases, so I thought it was worth pointing out to avoid any confusion about the main topic. I have to say I'm also a bit confused here. You started off suggesting putting the DBs onto a RAMDisk, saying you'd never done it yourself. When I pointed out the volatility, you said you meant Fusion-IO.

    Maybe we've just got our wires crossed, but I see it as a bit counter-productive to jump on anyone who mentions something you're already aware of on a public forum.
