How to construct orthogonal "start_date" data structure.

This "start_date" data structure looks useful:
https://datatables.net/manual/data/orthogonal-data#Predefined-values
but I can't figure out how to construct that JSON structure when retrieving the data from my database (MySQL with PHP).
Any suggestions welcome.
EDIT: should have mentioned using DT Editor.
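For reference, the predefined-values structure from that manual page looks roughly like this. A minimal sketch - the "start_date" field name follows the question, the row values are made up, and the `display` / `sort` / `filter` keys are the ones DataTables reads by default for those orthogonal data types:

```javascript
// One row of data with an orthogonal "start_date" object
// (example values only).
const row = {
  name: "Tiger Nixon",
  start_date: {
    display: "Mon 25th Apr 2011",           // what the user sees
    sort: "2011-04-25",                     // ISO value used for ordering
    filter: "Mon 25th Apr 2011 2011-04-25"  // text used for searching
  }
};

// Client side, the column definition points each orthogonal
// type at the matching nested property:
const columnDef = {
  data: "start_date",
  render: {
    _: "display",      // default rendering
    sort: "sort",
    filter: "filter"
  }
};
```

On the PHP side the job is then simply to build that nested array per row (e.g. from the raw MySQL date plus a formatted version) before `json_encode()`-ing the response.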
This question has an accepted answer.
This discussion has been closed.
Answers
Hey tangerine,
this should work (hopefully):
Since you have helped me generously on multiple occasions, here is some code that makes extensive use of database column aliasing - and it works! It is about preparing a log for display. The log values are written as a JSON string and saved in the log table, as per @allan's blog here:
https://datatables.net/blog/2015-10-02#Logging
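To illustrate the pattern from that blog post: each log row stores the whole change set serialised into a single JSON-string column. A sketch with hypothetical column names (`user_id`, `action`, `vals`), not the actual log schema:

```javascript
// The set of values changed by one edit (example data).
const change = { start_date: "2022-03-01", status: "open" };

// One audit-log row: everything that changed goes into a single
// "vals" column as one JSON string, alongside a few fixed columns.
const logRow = {
  user_id: 42,                      // who made the change (assumed column)
  action:  "edit",                  // e.g. create / edit / remove
  vals:    JSON.stringify(change),  // the whole change set as one string
  when:    new Date().toISOString()
};
```

This keeps the log table schema stable no matter which fields an edit touches - the price is that reading the log back means decoding that JSON, which is exactly the challenge addressed below.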
My challenge was:
- How to read a single field that contains JSON-encoded values and prepare its content for display as the original (multiple) individual fields?
- How to do this if the log format has changed over time?
- How to do this if two different log formats are in use, because the log is written sometimes via DT Editor and sometimes via proprietary SQL data manipulations?
- How to apply "SELECT DISTINCT" on the result data?
- How to order the result in a way that is suitable for Excel export while using a different sort order for the front-end display?
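The JSON-decoding part of those steps can be sketched like this. This is not rf1234's actual code (which is server-side PHP/SQL) - just a much-simplified JavaScript illustration with made-up field names and formats, showing the idea of detecting which log format a row was written in, flattening it back to individual fields, and de-duplicating the result:

```javascript
// Hypothetical log rows: "vals" holds a JSON string whose shape
// changed over time (old format: flat values; newer format:
// values nested under a "data" key).
const logRows = [
  { id: 1, vals: '{"start_date":"2020-01-01","status":"open"}' },
  { id: 2, vals: '{"data":{"start_date":"2021-06-15","status":"closed"}}' },
  { id: 3, vals: '{"start_date":"2020-01-01","status":"open"}' }  // duplicate content
];

// Flatten one row to the same field list regardless of format.
function expandLogRow(row) {
  const parsed = JSON.parse(row.vals);
  const fields = parsed.data ?? parsed;   // detect old vs new format
  return { start_date: fields.start_date, status: fields.status };
}

const expanded = logRows.map(expandLogRow);

// "SELECT DISTINCT" equivalent on the expanded values: key each
// row by its field values and keep one row per key.
const distinct = [...new Map(
  expanded.map(r => [r.start_date + "|" + r.status, r])
).values()];
```

In the real solution this flattening is done in SQL (per-field aliasing of the JSON content), which is what the performance comparison below is about.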
This is all being addressed in this code:
That is some epic code @rf1234 - thanks for sharing!
Allan
Wow! @rf1234, now that's what I call a response! Much appreciated.
This will take me a while to digest. Meanwhile - well, wow!
Thanks guys! Always a pleasure for me to help because I know what you are doing for the DT community. Outstanding.
I didn't like doing this for each and every field:
But it turned out to be more efficient than the alternative below, which avoids doing this for every field: it simply ran a lot faster than the code below (shortened).
The use of indexes in the log table was of critical importance for performance! Don't hesitate to index pretty much every field (except the "values" field that contains the JSON). Indexing (theoretically) slows down log writing, but it certainly helps a lot when reading the log. (I deleted all of the WHERE clauses and JOINs from the code examples, so it isn't obvious why the indexes are really required ... but they are.)