Thread: Problem with empty detail datasets and cached updates



Replies: 6 - Last Post: Sep 18, 2017 4:36 AM - Last Post By: jesu ortega
Brent Rose

Posts: 126
Registered: 9/23/00
Problem with empty detail datasets and cached updates
  Posted: Jul 29, 2017 4:36 PM
Hi

I have m/d queries with parameter based link where I want to use cached
updates (Tokyo).

As I scroll the master records, the transaction is opened and closed for
each new master record (whereas with range based m/d you already have
all the possible detail records, so there is no need to open/close the
transaction again). I understand this, as far as it goes.

The detail datasets are "cached on demand" somehow, so there is
apparently no need to open/close the transaction again when scrolling
back to that particular master record.

THE PROBLEM/EXCEPTION: When scrolling encounters a master without any
detail records, a transaction is ALWAYS opened/closed to re-query the
detail records. I assume that this might be due to the fact that FireDAC
is deciding the detail dataset has not actually been retrieved (when it
really has) because the RecordCount = 0? If this is not a bug, why would
it do that?

Anyway, this creates an issue where the user deletes all the detail
records for a given master. As the detail dataset is now empty, when you
scroll back to the master, it is re-queried (transaction opened/closed)
and, of course, all the same detail records re-appear (from the as yet
unchanged database table)!

Q: How can this sort of cached update work, if you cannot effectively
stop FireDAC re-querying already retrieved detail data?

Q: Is this detail dataset "cache" otherwise persistent (no matter how
many master records there are)? i.e. are there any other circumstances
where the same detail dataset will be re-queried with no means to
prevent it?

Q: Is there a way to fetch ALL the relevant detail records for the
retrieved masters ONCE at the outset? This would perhaps obviate the
need to re-query detail data.

I have looked at the various FetchOptions, but no change from the
defaults seems to apply to this problem.
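
For context, the setup is roughly this (component, field and parameter
names here are simplified for illustration, not my actual project):

```pascal
// Parameter-based M/D: the detail SQL has a parameter whose name matches
// the master field, and MasterSource points at the master's datasource.
qryMaster.SQL.Text := 'select ID, NAME from MASTER_TABLE';

qryDetail.SQL.Text := 'select * from DETAIL_TABLE where MASTER_ID = :ID';
qryDetail.MasterSource := dsMaster;            // dsMaster.DataSet = qryMaster

// fiDetails should keep fetched detail rows cached across master scrolls
qryDetail.FetchOptions.Cache :=
  qryDetail.FetchOptions.Cache + [fiDetails];

qryMaster.CachedUpdates := True;
qryDetail.CachedUpdates := True;

qryMaster.Open;
qryDetail.Open;
```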

NOTE: My situation is where range-based m/d is not practical because
even with a small number of master records retrieved, the detail table
has over 1.5 million detail records. I cannot retrieve them all, and
have no way to efficiently identify just the detail records for those
masters that may be retrieved.

I hope I am just missing some important aspect of FireDAC that means
cached updates will actually work in this context, and appreciate any
help someone may be able to provide. Thanks. :-)

Brent Rose
Brent Rose

Posts: 126
Registered: 9/23/00
Re: Problem with empty detail datasets and cached updates
  Posted: Aug 1, 2017 2:05 PM   in response to: Brent Rose in response to: Brent Rose

It seems that, as surmised, FireDAC is re-querying the detail dataset
whenever the detail RecordCount is 0.

This is the piece of code in FireDAC.Comp.DataSet where it fetches
details for that reason (line 1940 in CheckDetailRecords procedure of
XE5 source):

if not Active or (FSourceView = nil) or (FSourceView.Rows.Count = 0) then
  lChanged := FetchDetails(True);

If the row count condition is removed, then details are not fetched for
ANY master. If it is left in, it is impossible to manage the situation
where the user scrolls back to a master record for which the detail
dataset has been emptied (because the original detail records are
reinstated). As this is a circumstance that is reasonably likely to
arise, this issue is a bit of a show stopper :-(

Is there perhaps some other indicator that a detail dataset has already
been retrieved, even when empty? Clearly the row count is inadequate.
jesu ortega

Posts: 68
Registered: 9/28/01
Re: Problem with empty detail datasets and cached updates
  Posted: Sep 6, 2017 4:15 AM   in response to: Brent Rose in response to: Brent Rose
I'm using range-based master/detail and have just found the same bug. I'm on a master record with 2 details, scroll to the next one which has no details, scroll back to the previous one and now it shows 4 details (each record is shown twice). If I repeat the process there are 6 details, then 8, and so on.

Using the FireDAC monitor I've confirmed that if I scroll between records that have details, nothing is executed in the database (as it should be), but when I scroll to a record that has no details, the details of the previous record are retrieved again.

Can you put a link to the report in Quality Central so I can vote?
TIA

Edited by: jesu ortega on Sep 6, 2017 4:37 AM
Brent Rose

Posts: 126
Registered: 9/23/00
Re: Problem with empty detail datasets and cached updates
  Posted: Sep 9, 2017 1:19 PM   in response to: jesu ortega in response to: jesu ortega
Hi Jesu

Issues already added:

RSP-18733
RSP-18739
RSP-18740

It still surprises me that these issues do not seem to be much of a
problem for other developers (i.e. there is not a lot to be seen on the
subject in the forum). After all, the issues are long-standing.

It is also very frustrating that such a highly developed and fully
featured connectivity package is so crippled in this way as to make it
useless. Well, that might be a bit harsh, but still... for me it is a
show-stopper :-(

jesu ortega

Posts: 68
Registered: 9/28/01
Re: Problem with empty detail datasets and cached updates
  Posted: Sep 18, 2017 4:36 AM   in response to: jesu ortega in response to: jesu ortega
I've been investigating why I hadn't seen this before, and it seems to happen only in some forms. I have a form with this code:

with QueryMaster do
begin
  Close;
  Params[0].AsString := myparam;
  Open;
end;
with QueryDetail do
begin
  Close;
  Params[0].AsString := myparam;
  Open;
end;


that shows the error. If I change it to

with QueryMaster do
begin
  Close;
  Open;
end;
with QueryDetail do
begin
  Close;
  Open;
end;
with QueryMaster do
begin
  Filter := 'myfield = ' + QuotedStr(myparam);
  Filtered := true;
end;


the result is the same and the error doesn't happen. Of course, that is not practical if the table is huge.
Pablo Anizio

Posts: 10
Registered: 6/10/08
Re: Problem with empty detail datasets and cached updates
  Posted: Sep 10, 2017 3:10 PM   in response to: Brent Rose in response to: Brent Rose
I created a temporary solution for this case: modify the CheckDetailRecords procedure in FireDAC.Comp.DataSet:

procedure TFDDataSet.CheckDetailRecords(AEnforce: Boolean);

  function HasDeleted: Boolean; //Added
  begin //Added
    FilterChanges := [rtDeleted]; //Added
    Result := (RecordCount > 0); //Added
    FilterChanges := [rtUnmodified, rtModified, rtInserted]; //Added
  end; //Added

  function FetchDetails(AAll: Boolean): Boolean;
  begin
    Result := False;
    if DoIsSourceOnline and (
         DoMasterParamDependent(FMasterLink.Fields) and
         DoMasterParamChanged(FMasterLink.Fields) or
         ((FTable = nil) or (FTable.Rows.Count = 0))
       ) then begin
      if not (fiDetails in FetchOptions.Cache) then
        DoClearBeforeRefetch;
      DoCloseSource;
      DoMasterParamSetValues(FMasterLink.Fields);
      FSourceEOF := False;
      FUnidirRecsPurged := 0;
      FRecordCount := -1;
      DoOpenSource(True, False, (FTable = nil) or (FTable.Columns.Count = 0));
      InternalFetchRows(AAll, False);
      Result := True;
    end;
  end;

var
  lChanged: Boolean;
begin
  lChanged := False;
  if DataSetField = nil then begin
    if AEnforce or
       not MasterSource.DataSet.IsEmpty and
       not (MasterSource.DataSet.State in [dsInsert, dsSetKey]) then begin
      if fiDetails in FetchOptions.Cache then begin
        if not Active or (FSourceView = nil) or
           ((FSourceView.Rows.Count = 0) and (not HasDeleted)) then //Modified

Brent Rose

Posts: 126
Registered: 9/23/00
Re: Problem with empty detail datasets and cached updates
  Posted: Sep 11, 2017 4:48 AM   in response to: Pablo Anizio in response to: Pablo Anizio

Thank you Pablo, that is a good work-around for the bug with deleting
all detail records. Unfortunately, that still leaves FireDAC redundantly
re-querying an already empty detail dataset every time its master is
focused (since no deleted detail records will be in the cache)...

...but of course the bigger problem remains that FireDAC really does NOT
fully cache master/detail data, EXCEPT where you can retrieve all the
detail records for all the masters in one hit... which is typically not
the case.

IMHO, a cache where new data is constantly being retrieved into it is
not really a "true cache". This behaviour compromises the integrity of
the cache. A cache should be a complete snapshot of the required data
retrieved within the scope of a single transaction, with a further
single transaction to apply changes at some later point.

After several attempts to apply FireDAC to master/detail caching from
XE5 to Tokyo, I have concluded that it is STILL necessary to use
TClientDataSet (which handles master/detail data very well). While you
can certainly use FireDAC to retrieve data through a provider to
TClientDataSet, it does mean:

a) you are using an extra memory dataset every time (being the redundant
FireDAC mem table in addition to TClientDataSet)

b) you lose access to all the otherwise useful features of FireDAC
(especially since it is so much easier to apply and use for
non-master/detail data).
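
For anyone unfamiliar with that arrangement, the provider-based fallback
looks roughly like this (component names are made up for illustration;
the nested dataset field name defaults to the detail dataset's name):

```pascal
// The FireDAC M/D pair is exported through a TDataSetProvider; the detail
// arrives as a nested dataset field inside the master TClientDataSet.
prvMaster.DataSet := qryMaster;            // qryDetail linked via MasterSource
cdsMaster.ProviderName := 'prvMaster';
cdsDetail.DataSetField :=
  cdsMaster.FieldByName('qryDetail') as TDataSetField;
cdsMaster.Open;                            // fetches masters + nested details

// ...user edits both levels; everything accumulates in the Delta...
cdsMaster.ApplyUpdates(0);                 // one round trip applies the lot
```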

In the greater scheme of things you might argue that this is not so
significant, but it very definitely remains a "dirty" solution. You have
to ask yourself, then, why bother with FireDAC at all? I guess the main
reasons are that:

a) FireDAC is the chosen Embarcadero data connection technology going
forward, and where development resources will be targeted.
TClientDataSet is stagnant technology.

b) FireDAC has a whole world of connectivity capability with a huge
range of configuration options, and lots of other great features.

These are the reasons I optimistically keep revisiting FireDAC in the
hope it may improve on the caching front.

Sadly, I see no indication whatsoever that Embarcadero intend to do
anything about this. Certainly, no amount of griping about it on my part
will achieve much, except periodically allow me to vent some frustration
I suppose ;-)