Thread: ClientDataSet dog slow when it gets big. What to do next?




Replies: 10 - Last Post: Sep 17, 2014 4:42 AM by Rael Bauer
Kevin Killion

Posts: 19
Registered: 2/3/03
ClientDataSet dog slow when it gets big. What to do next?  
  Posted: Sep 10, 2014 2:23 PM
I create a ClientDataSet as a scratch working array for a big task. There are 5 integer fields.

It's pretty darn snappy with 150,000 to 250,000 records, faster than I expected.

But it slows down awfully with 1,600,000 records. That's not a surprise, I guess, but what is the reason for the poor speed? Is it simply the nature of the task? Or is it running into virtual memory kind of problems? (And if the latter, how do I diagnose that?)

More importantly, what is the solution? Would a "proper" database solve the speed problem? (That's sticky as well, as I am not using any database server, and I can't find any examples at all of creating and using a database file at runtime that don't start with connecting to some server.)

Sorry for the naive database questions, but thanks for any tips!
Kevin
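
(For reference, a scratch table like the one described is typically created at runtime along these lines. This is a minimal sketch only; the field names are made up:)

```delphi
uses
  DB, DBClient;

var
  cds: TClientDataSet;
begin
  cds := TClientDataSet.Create(nil);
  // five integer fields; names are illustrative
  cds.FieldDefs.Add('F1', ftInteger);
  cds.FieldDefs.Add('F2', ftInteger);
  cds.FieldDefs.Add('F3', ftInteger);
  cds.FieldDefs.Add('F4', ftInteger);
  cds.FieldDefs.Add('F5', ftInteger);
  cds.CreateDataSet;  // builds the in-memory table from the defs
end;
```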
Linden ROTH

Posts: 467
Registered: 11/3/11
Re: ClientDataSet dog slow when it gets big. What to do next?  
  Posted: Sep 10, 2014 11:29 PM   in response to: Kevin Killion
Kevin Killion wrote:
I create a ClientDataSet as a scratch working array for a big task. There are 5 integer fields.

It's pretty darn snappy with 150,000 to 250,000 records, faster than I expected.

But it slows down awfully with 1,600,000 records. That's not a surprise, I guess, but what is the reason for the poor speed? Is it simply the nature of the task? Or is it running into virtual memory kind of problems? (And if the latter, how do I diagnose that?)

More importantly, what is the solution? Would a "proper" database solve the speed problem? (That's sticky as well, as I am not using any database server, and I can't find any examples at all of creating and using a database file at runtime that don't start with connecting to some server.)

Sorry for the naive database questions, but thanks for any tips!
Kevin

Got to ask: why a CDS and not some array of objects (or records)? Five integer fields is only about 32 MB (at 32-bit, plus overheads), so it's hardly a virtual memory issue.

So how are you referencing the elements?
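
(A plain dynamic array of records, as suggested above, would look something like this sketch; the type and field names are made up:)

```delphi
type
  TRow = record
    A, B, C, D, E: Integer;  // the five integer "fields"
  end;

var
  Rows: array of TRow;
  i: Integer;
begin
  SetLength(Rows, 1600000);  // ~32 MB: 1,600,000 * 5 * SizeOf(Integer)
  for i := 0 to High(Rows) do
    Rows[i].A := i;  // direct indexed access, no dataset overhead
end;
```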

--
Linden
"Mango" was Cool but "Wasabi" was Hotter but remember it's all in the "source"
Cristian Peța

Posts: 157
Registered: 8/7/06
Re: ClientDataSet dog slow when it gets big. What to do next?  
  Posted: Sep 11, 2014 12:01 AM   in response to: Kevin Killion
Kevin Killion wrote:
More importantly, what is the solution?
Do you need to cancel updates? If not, it is better to set LogChanges to False.
Or, after you have inserted a lot of records into the table, call MergeChangeLog.
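
(Sketched out, assuming the dataset variable is named cds:)

```delphi
// If cancelling updates is not needed, skip the change log entirely:
cds.LogChanges := False;

// Alternatively, after a bulk insert, collapse the accumulated log:
cds.MergeChangeLog;
```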

Best regards,
Cristian Peta
Vladimir Ulchenko

Posts: 248
Registered: 1/12/00
Re: ClientDataSet dog slow when it gets big. What to do next?  
  Posted: Sep 11, 2014 12:03 AM   in response to: Kevin Killion
On Wed, 10 Sep 2014 14:23:35 -0700, Kevin Killion <> wrote:

It's pretty darn snappy with 150,000 to 250,000 records, faster than I expected.
But it slows down awfully with 1,600,000 records.

do you have any demo project demonstrating the effect?

That's not a surprise, I guess, but what is the reason for the poor speed? Is it simply the nature of the task? Or is it running into virtual memory kind of problems? (And if the latter, how do I diagnose that?)

besides CDS/midas-specific design/speed problems, any pure in-memory dataset will suffer performance problems as soon as its internal storage stops fitting in available memory and/or the OS starts swapping it. you can estimate the memory required by the storage by multiplying the dataset's RecordSize property by its RecordCount. besides that, the underlying ChangeLog (if logging is turned on, which itself slows down the CDS) will waste a fairly large amount of memory. actually, I believe it shouldn't be big enough to cause any memory-shortage problem for just 5 integer fields
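
(That estimate can be computed directly; a sketch, assuming an open dataset named cds:)

```delphi
var
  EstimatedBytes: Int64;
begin
  // rough size of the record buffers only; change log and indexes are extra
  EstimatedBytes := Int64(cds.RecordSize) * cds.RecordCount;
end;
```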

More importantly, what is the solution? Would a "proper" database solve the speed problem? (That's sticky as well, as I am not using any database server, and I can't find any examples at all of creating and using a database file at runtime that don't start with connecting to some server.)

or maybe just another (more efficient) in-memory dataset

--
Vladimir Ulchenko aka vavan
Cristian Peța

Posts: 157
Registered: 8/7/06
Re: ClientDataSet dog slow when it gets big. What to do next?  
  Posted: Sep 11, 2014 12:35 AM   in response to: Vladimir Ulchenko
Vladimir Ulchenko wrote:
That's not a surprise, I guess, but what is the reason for the poor speed? Is it simply the nature of the task? Or is it running into virtual memory kind of problems? (And if the latter, how do I diagnose that?)

besides CDS/midas-specific design/speed problems any pure in-memory dataset will suffer from performance problems as soon as its internal
storage will stop fit available memory and/or OS will swap it.
For 5 integers I estimate about 50MB. With LogChanges set to True it is double that, but I doubt this is the issue.
I experienced a big slowdown with fewer records (using the change log). And every update is much slower with a big change log.

Best regards,
Cristian Peta
Vladimir Ulchenko

Posts: 248
Registered: 1/12/00
Re: ClientDataSet dog slow when it gets big. What to do next?  
  Posted: Sep 11, 2014 2:14 AM   in response to: Cristian Peța
On Thu, 11 Sep 2014 00:35:49 -0700, Cristian Peta <> wrote:

For 5 integers I estimate about 50MB. With LogChanges to true it is double but I doubt this is the issue.
I experienced a big slowdown with less records (using changes log). And every update is much slower with a big change log.

I also believe it is either a poor design decision or just an implementation glitch in midas.dll.
IIRC, in some tests there was virtually no difference in speed when I benchmarked my own midas implementation, no matter whether LogChanges was active or not. And in some tests it was as fast as (or even faster than) kbmMT and/or the AnyDAC memtable.

--
Vladimir Ulchenko aka vavan
Wayne Niddery

Posts: 791
Registered: 4/14/98
Re: ClientDataSet dog slow when it gets big. What to do next?
Helpful
  Posted: Sep 13, 2014 10:24 AM   in response to: Kevin Killion
"Kevin Killion" wrote in message news:690511 at forums dot embarcadero dot com...

But it slows down awfully with 1,600,000 records. That's not a surprise,
I guess, but what is the reason for the poor speed? Is it simply the
nature of the task? Or is it running into virtual memory kind of
problems? (And if the latter, how do I diagnose that?)

More importantly, what is the solution? Would a "proper" database solve
the speed problem? (That's sticky as well, as I am not using any database
server, and I can't find any examples at all of creating and using a
database file at runtime that don't start with connecting to some server.)

I would never use any memory dataset as a database engine for this kind of
volume. Very small tables are fine of course, and they can also be very
valuable when used with an actual database (this is how they are used as
part of DataSnap).

There are lots of database engines that do not require a separate server to
be set up such as Nexus. The engine compiles directly into your application.
Note that in this mode they are not intended to be multi-user. If you need
multi-user then you want a proper database server that all clients connect
to.

--
Wayne Niddery
"You know what they call alternative medicine that has been proven to work?
Medicine." - Tim Minchin

Kevin Killion

Posts: 19
Registered: 2/3/03
Re: ClientDataSet dog slow when it gets big. What to do next?  
  Posted: Sep 14, 2014 11:26 AM   in response to: Wayne Niddery
I would never use any memory dataset as a database engine for this kind of
volume.

Yes, I appreciate that. But why? Or, a better question, at what point is the size a problem?

If a large size will run into memory issues, how can I judge how much is too much?

What measures can I use or what indications are there that memory is becoming a problem?


There are lots of database engines that do not require a separate server to
be set up such as Nexus. The engine compiles directly into your application.

Great suggestion, thanks! And checking now I see that Nexus is even free for its embedded version.

You mentioned that there are lots of such engines that do not require a separate server and compiled directly into the app. Besides Nexus, what other good, fast ones are recommended?

Thanks,
Kevin
Wayne Niddery

Posts: 791
Registered: 4/14/98
Re: ClientDataSet dog slow when it gets big. What to do next?  
  Posted: Sep 14, 2014 7:58 PM   in response to: Kevin Killion
"Kevin Killion" wrote in message news:691156 at forums dot embarcadero dot com...
I would never use any memory dataset as a database engine for this kind of volume.

Yes, I appreciate that. But why? Or, a better question, at what point is
the size a problem?

When you open a client dataset on 1.6 million records, it reads the entire
file into memory. When you close it, it must write all 1.6 million records
back to disk. Any even-half-assed database engine is not going to do this.
You can force a real database engine to read everything by issuing a "select
* from table;", but that is not the default behaviour, and even then, when
you add or update a record, it does not need to write ALL records back to
disk; it can write just that one. A database engine uses indexes effectively
in order to read as little record data as possible, not just to find or
order particular records already in memory.
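
(Illustrating the difference: with an embedded engine, the insert below touches only the affected row and index pages, never the whole table. The connection variable and table name are hypothetical; ExecSQL is FireDAC's TFDConnection method:)

```delphi
// hypothetical table "scratch" with five integer columns;
// conn is an already-open TFDConnection
conn.ExecSQL('INSERT INTO scratch (a, b, c, d, e) VALUES (:a, :b, :c, :d, :e)',
  [10, 20, 30, 40, 50]);
```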

You mentioned that there are lots of such engines that do not require a
separate server and compiled directly into the app. Besides Nexus, what
other good, fast ones are recommended?

It's been quite a while now since I've looked at what is available. Besides
Nexus, I can think of Firebird as another free database that can also
optionally be compiled directly in. I believe you can now do so with
InterBase too, but it requires payment per user.

--
Wayne Niddery
"You know what they call alternative medicine that has been proven to work?
Medicine." - Tim Minchin
Vladimir Ulchenko

Posts: 248
Registered: 1/12/00
Re: ClientDataSet dog slow when it gets big. What to do next?  
  Posted: Sep 15, 2014 12:58 AM   in response to: Kevin Killion
On Sun, 14 Sep 2014 11:26:53 -0700, Kevin Killion <> wrote:

I would never use any memory dataset as a database engine for this kind of
volume.

Yes, I appreciate that. But why? Or, a better question, at what point is the size a problem?

decently written in-memory datasets have no problem handling such moderate amounts of data.
for example, a test app built in D2007 using a custom midas version filled a cds (only 5 integer fields) with 1,600,000 records in just about 4 seconds, with logging turned on.
with the native midas version 14 it took ~12 seconds to append just 250,000 records.

in both cases the amount of RAM used was nothing worth being concerned about.

so it's just a matter of the quality and/or design of the specific library

If a large size will run into memory issues, how can I judge how much is too much?

What measures can I use or what indications are there that memory is becoming a problem?

perhaps only tests can reveal such problems, but given the past history I would strongly advise against using native cds/midas for any non-trivial work or amounts of data

Great suggestion, thanks! And checking now I see that Nexus is even free for its embedded version.

You mentioned that there are lots of such engines that do not require a separate server and compiled directly into the app. Besides Nexus, what other good, fast ones are recommended?

if you don't really need all those extra goodies provided by a full-blown db engine such as Nexus (which is great, I believe, and written by great developers), you may find that the freely available kbmMT dataset fits your needs

--
Vladimir Ulchenko aka vavan
Rael Bauer

Posts: 228
Registered: 10/10/02
Re: ClientDataSet dog slow when it gets big. What to do next?  
  Posted: Sep 17, 2014 4:42 AM   in response to: Wayne Niddery
On 2014/09/13 07:24 PM, Wayne Niddery wrote:
There are lots of database engines that do not require a separate server to
be set up such as Nexus.

sqlite.org is very popular nowadays. There are freeware components (aducom.com) and
commercial ones (devart.com, or FireDAC).
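
(For example, an embedded SQLite database file via FireDAC needs no server at all; a sketch, with the file and table names made up:)

```delphi
uses
  FireDAC.Comp.Client, FireDAC.Phys.SQLite;

var
  conn: TFDConnection;
begin
  conn := TFDConnection.Create(nil);
  try
    conn.DriverName := 'SQLite';
    conn.Params.Database := 'scratch.db';  // file is created if it doesn't exist
    conn.Open;
    conn.ExecSQL('CREATE TABLE IF NOT EXISTS scratch ' +
      '(a INTEGER, b INTEGER, c INTEGER, d INTEGER, e INTEGER)');
  finally
    conn.Free;
  end;
end;
```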
