You are indeed quite correct. I've altered the fnGetData() function to be much faster for cases when only a single row is requested (the same thing could be done using the oSettings object). Likewise with fnGetNodes() I've added the option to get a single node based on the index, which is also now nice and fast.
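To make the fast path concrete, here is a minimal sketch of what an index-based lookup could look like internally; the `aoData` shape and the function body are a mock for illustration, not the actual DataTables source:

```javascript
// Mock of the single-row fast path: when an index is given, return just
// that row in O(1) instead of copying the whole table. aoData mirrors a
// row store of the form { _aData: [...] } per row.
function fnGetData(aoData, iRow) {
    if (iRow !== undefined) {
        return aoData[iRow]._aData;          // fast path: no full copy
    }
    var aOut = [];                           // slow path: copy every row
    for (var i = 0; i < aoData.length; i++) {
        aOut.push(aoData[i]._aData);
    }
    return aOut;
}

var aoData = [
    { _aData: ["Gecko", "Firefox"] },
    { _aData: ["Trident", "IE"] }
];
console.log(fnGetData(aoData, 1));      // ["Trident", "IE"]
console.log(fnGetData(aoData).length);  // 2
```

The fast path never touches rows other than the one requested, which is why a single-row fetch no longer scales with table size.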
I gave the 4.1 beta a spin a couple of weeks ago, displaying data on a unit test page: 10 tables with anywhere from 1 to 1,000 rows (and about 5 to 40 columns), loaded dynamically one after the other. For testing it was plenty fast under Firefox 3. IE8 Beta had a rough time with _fnGetDataMaster, though; it was so slow that it prompted to continue a long-running script at least a couple of dozen times. From the profile, it looked like the array push() was to blame (probably just a beta thing; it didn't have a problem with splice()). The pushes definitely add up quickly; the profiles showed millions of them. Because it gets called for each column, for each row, for each table, it got me wondering whether it might be good to cache the aData[][] return value, if that's worth it considering that 40-column tables aren't a typical use case. Does it need to be a shallow copy? Could it be a deep copy using splice(), or could the same cached array be returned on a per-row / per-table basis?
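One way to sketch the caching idea: build the flattened copy once with a preallocated array instead of repeated push() calls, hand back the same array until the table changes, and invalidate on mutation. `Table` and `getDataMaster` here are invented stand-ins for the internal structures, not real DataTables code:

```javascript
// Sketch of caching the flattened data copy so repeated calls don't redo
// the per-cell push() work that showed up in the IE8 profile.
function Table(aoData) {
    this.aoData = aoData;     // [{ _aData: [...] }, ...] per row
    this._cache = null;       // cached flattened copy, null when stale
}

Table.prototype.getDataMaster = function () {
    if (this._cache === null) {
        // Preallocate the outer array rather than growing it with push().
        var aOut = new Array(this.aoData.length);
        for (var i = 0; i < this.aoData.length; i++) {
            // slice() gives a shallow per-row copy; a caller that mutates
            // the cell values would still see those changes in the cache.
            aOut[i] = this.aoData[i]._aData.slice();
        }
        this._cache = aOut;
    }
    return this._cache;       // same array on every call until invalidated
};

// Any mutation of the table must clear the cache.
Table.prototype.addRow = function (aRow) {
    this.aoData.push({ _aData: aRow });
    this._cache = null;
};

var t = new Table([{ _aData: [1, 2] }, { _aData: [3, 4] }]);
var a = t.getDataMaster();
console.log(a === t.getDataMaster());   // true: cache hit, nothing re-copied
t.addRow([5, 6]);
console.log(t.getDataMaster().length);  // 3: cache rebuilt after the change
```

The trade-off is exactly the shallow-vs-deep question raised above: a shallow cache is cheap but aliases the row contents, while a deep copy would pay the full cost once per invalidation.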
It's great for dynamic tables, by the way; impressive considering that wasn't really a central design goal. For 5.0, I'd love to be able to get the column name/number, rather than just the cell data, when determining custom types. I have a feeling that's easier said than done, but I worry about edge cases where string data looks like a date/number/money value but should really be treated as a string.
What if you turned the data structure around? At the moment you have one array of objects, each holding both the tr node and the data row; instead you could have two parallel arrays, one of tr nodes and one of data rows, kept in sync. It is harder to program that way, but it is much easier and far more efficient to return one of those arrays. For the programming part you can write some helper functions for accessing the two arrays, which would keep them in sync and also abstract away the underlying data structure.
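A minimal sketch of the two-parallel-arrays idea, with helper functions as the only code that touches both arrays (the names here are illustrative, not part of any real DataTables API):

```javascript
// The tr nodes and the data rows live in separate arrays at matching
// indices; all access goes through helpers that keep the two in sync.
function RowStore() {
    this.aTrNodes = [];   // one tr node (or placeholder) per row
    this.aData = [];      // one data array per row, same index as aTrNodes
}

// Add a row to both arrays in one step, so they can never drift apart.
RowStore.prototype.addRow = function (trNode, rowData) {
    this.aTrNodes.push(trNode);
    this.aData.push(rowData);
    return this.aData.length - 1;  // index of the new row
};

// Remove a row by index from both arrays together.
RowStore.prototype.removeRow = function (iRow) {
    this.aTrNodes.splice(iRow, 1);
    this.aData.splice(iRow, 1);
};

// Returning "the data" is now just handing back one existing array --
// no per-row copying loop, which is the efficiency win described above.
RowStore.prototype.getData = function () {
    return this.aData;
};

var store = new RowStore();
store.addRow({ nodeName: "TR" }, ["Alice", 30]);
store.addRow({ nodeName: "TR" }, ["Bob", 25]);
store.removeRow(0);
console.log(store.getData());  // [["Bob", 25]]
```

The cost is that every mutation has to go through the helpers; any code that pushes to one array directly would silently break the index correspondence.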
OK, it is true that I do not really need access to the whole array directly through the API. Currently I am doing something like this:
var aDataMaster = oTable.fnGetData();
for (var i = 0; i < aDataMaster.length; i++) {
    var row = aDataMaster[i];
    ...
}
And I could change this to:
var length = ...;
for (var i = 0; i < length; i++) {
    var row = oTable.fnGetData(i);
    ...
}
So the data structure can be left as it is, and the helper functions I mentioned above can be provided as API functions. For example, there is currently no API function to get the number of rows of data. (Yes, I could access it directly, but I would like to use the API in my code.)
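As a sketch of what such an API function could look like: `fnGetDataLength` is an invented name, and the `oSettings.aoData` shape below mimics an internal row store, not the real DataTables object.

```javascript
// Hypothetical row-count API function; name and oSettings shape are
// illustrative only.
function fnGetDataLength(oSettings) {
    return oSettings.aoData.length;
}

// With it, the loop above no longer needs direct access to the internals:
var oSettings = { aoData: [{ _aData: ["a"] }, { _aData: ["b"] }, { _aData: ["c"] }] };
for (var i = 0; i < fnGetDataLength(oSettings); i++) {
    var row = oSettings.aoData[i]._aData;  // stand-in for oTable.fnGetData(i)
}
console.log(fnGetDataLength(oSettings));   // 3
```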
Replies
Thanks for spotting this.
Allan
First and foremost, thank you Allan.
Thanks again,
James