Editor doesn't respect default values when importing CSV files
Hi.
https://editor.datatables.net/examples/extensions/import.html
I have implemented CSV import based on this example, but there is a problem: after the CSV file is parsed, the data is loaded into a multi-row creation Editor window, and that form doesn't respect the fields' default values. For example, clicking the "New" button pre-fills some fields with their defaults, but the same form opened via CSV import doesn't display them.
How can I resolve this? (I am using Editor 2.0.8 if it is relevant.)
Answers
When you do the import, each field is given a value, so a default would never apply, unless I'm misunderstanding?
Allan
Right, that's actually something that didn't feel right when I was implementing it, because not all of the fields are required. Importing should normally allow some columns to be left un-mapped (i.e. empty), but this implementation doesn't support that. Sorry, I forgot to mention that I worked around it by leaving an empty column in the CSV file and mapping the non-required fields to that empty column.
That way I expected the default values to take over, but now I realize that if importing forces every field to be mapped, it will never look at the defaults in the first place. I don't think that is right. Is there any way to work around this?
As far as I can see, the only viable option right now is to fill the CSV columns for fields with default values with that value, copied into every row.
Here is another issue with importing CSV files: it doesn't ignore hidden fields, which means it asks you to map CSV columns to hidden fields, and that defeats the purpose of a field being "hidden".
I tried filtering the field type with field.input().attr('type'), but it didn't work: it returns "text" for text fields and undefined for all other types (select, checkbox, hidden, etc.). As a temporary workaround, I only allow a field to be shown if its name doesn't match one of the hidden fields' names.
By the way, I can't comment on the Editor reference articles. Posting a comment produces an HTTP 500 error, which can be seen in the console. I was going to comment on the hidden field type reference: there is a possible flaw that should be noted when using hidden fields. The reference page gives an example that sets the value of the hidden field on the client side, which means anyone could alter it even though it's hidden. I think developers should be warned about that.
Anyone?
Sorry - I'm struggling to keep up with the volume of support requests at the moment and I'm missing some now and then.
On create, if you don't write a value to a field, it will take the default value. If it isn't doing that, it suggests something is being written to the field. Can you link to the page you are working on, perhaps?
Doh - thank you. Added to the list.
Yes - hidden fields most certainly shouldn't be used for security. Anything from the client-side should be inherently untrusted. I'll add a note to the docs saying that it is trivial for someone to modify values in the hidden fields (or any field).
Allan
On create, that's correct; but the question here is about importing. As you stated, importing doesn't respect default values because it assumes that you have a column in the CSV file for each column on the table. But this assumption is not right I think. For example my table could have 10 columns but the CSV file could have only 4. Nonetheless, the importing process maps the remaining 6 columns by rewinding the columns in the CSV file. (Like each column in the CSV file is mapped to the table columns twice.) There is no option to leave a table column un-mapped during import, not even the hidden ones.
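The wrap-around behaviour described above amounts to a simple modulo mapping. This is a minimal illustration of that behaviour, not the example's actual code (the function name is an assumption):

```javascript
// With fewer CSV columns than table fields, the default selection
// simply cycles through the CSV columns again - there is no
// "un-mapped" option for the surplus fields.
function defaultMapping(fieldCount, csvColumnCount) {
    var mapping = [];
    for (var i = 0; i < fieldCount; i++) {
        mapping.push(i % csvColumnCount); // wraps around
    }
    return mapping;
}

// 10 table fields, 4 CSV columns:
console.log(defaultMapping(10, 4)); // [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```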
An "import" is just a create - on line 28 in this example, you'll see that it calls create(). It then loops over all of the fields and sets the values. If you want it to be able to take a default value, it should perhaps not loop over the fields, but rather over the values selected for the mapping, getting each field as it goes and ignoring those which are left blank or set to some other "unselected" value.
I think it should be quite possible to do what you want, but it will require a couple of tweaks to the code in the example.
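A minimal sketch of that idea, under assumed names (none of these variables come from the example itself): loop over the user's column selections, write only the fields that were mapped, and leave the rest untouched so Editor's configured defaults apply.

```javascript
// Sketch: apply only the mapped values, so unmapped fields keep
// their Editor defaults. `row` is one parsed CSV row (an array of
// strings); `mapping` is { fieldName: csvColumnIndex } built from
// the user's selections, with unmapped fields simply absent;
// `setField` is whatever writes a value into the Editor form.
function applyMappedValues(row, mapping, setField) {
    Object.keys(mapping).forEach(function (fieldName) {
        var csvIndex = mapping[fieldName];
        if (csvIndex !== null && csvIndex !== undefined) {
            setField(fieldName, row[csvIndex]);
        }
        // Fields absent from `mapping` are never written, so the
        // default value configured on the Editor field is used.
    });
}
```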
Allan
Ok, here is what I did.
I added a few small tweaks to match the CSV column names against the label names of the table fields, so that matching fields are mapped automatically. I also added a hard-coded list of fields that I don't want to show up in the mapping (hard-coded because, as I said before, field.input().attr('type') returns undefined for field types other than text).
I also added an empty column to the Papaparse result by modifying it in the complete callback, before opening the selectColumns editor window. Now it automatically maps this empty column to the fields which weren't auto-detected and which you don't map manually.
Of course, this still ignores the default values defined in the main Editor, as you explained before. I think skipping the fields that should take their defaults could work just like it does for hidden fields: check whether the field has a default value and whether a CSV column is mapped to it; if one is, use the column, and if not, pass the field's default value to the selectEditor. I didn't try this, as the tweaks above were enough for me for now.
Caution: adding an empty column on the fly, rather than actually having one in the CSV file, will hurt performance if the CSV files you are working with are big. In that case, consider adding an actual empty column to the CSV file prior to importing instead. There is also the option of using Papaparse's streaming feature with the step callback, but that requires reconstructing the whole result set, which the author of Papaparse doesn't recommend: it defeats the purpose of streaming and makes the browser use more RAM.
Forget the previous method I posted - I don't know why I didn't think of this before. Here is a more "correct" and sleek method, without any performance concerns:
Copying from the example code, replace the first two blocks of the selectColumns() function definition as follows.
Find this line:
Replace it with:
Now, with this modification:
1- CSV columns will be automatically mapped to the Editor fields when the column names in both match as strings.
2- It won't wrap around the CSV columns for un-mapped fields; instead, it will leave them empty if you didn't map them and they weren't automatically mapped.
3- It will automatically use a field's default value if you didn't map it when importing (i.e. you left that field empty).
4- It will ignore the Editor's hidden fields, keeping them exempt from mapping.
5- It won't iterate over the CSV rows, so it causes no extra CPU or RAM overhead.
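Since the actual replacement code isn't reproduced above, here is a rough sketch of mapping logic with those five properties. Every name here (the field descriptor shape, buildMapping, def) is an assumption for illustration, not the poster's code:

```javascript
// Sketch: build a field-to-column mapping that auto-matches by
// label, skips hidden fields, and falls back to the field default
// when un-mapped. `fields` is [{ name, label, type, def }];
// `csvHeaders` is the parsed header row. Note that nothing here
// touches the CSV data rows.
function buildMapping(fields, csvHeaders) {
    var mapping = {};
    fields.forEach(function (field) {
        if (field.type === 'hidden') {
            return; // (4) hidden fields are exempt from mapping
        }
        var idx = csvHeaders.indexOf(field.label);
        if (idx !== -1) {
            mapping[field.name] = { csvIndex: idx };    // (1) auto-mapped by name
        } else {
            mapping[field.name] = { value: field.def }; // (3) default value
        }
        // (2) an un-matched field is never forced onto a CSV column
    });
    return mapping;
}
```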
P.S.: This method won't slow down your import, but if your files are really big, the process will be slow and resource-consuming with or without this modification. So, regardless of whether you use the method I am proposing, you should consider Papaparse's streaming feature in that case. If you do, and you also use the method I am posting now, you won't have the problems I mentioned in my previous post.
Very nice - thank you for posting this! Great to hear you've got a solution that works for you.
Allan
Any opinions as to why field.input().attr('type') returns undefined, Allan? It works as expected only for text fields.
Yes - because input() gets the input element wrapped in a jQuery object, so .attr('type') reads the type attribute from that input element. <select>, <textarea> and other inputs don't have such an attribute. The password field type will return "password" for that call, but I would quite expect it not to return something for all field types.
There isn't currently an API to determine what Editor field type a field is, I'm afraid. I haven't encountered a need for that before, but I can certainly look at adding it if it is needed.
Allan
But isn't the "hidden" type actually an input element? Why would that return undefined?
It is, but it isn't ever inserted into the DOM, so I didn't add the type attribute to it - it wasn't needed! It is trivial to add, but it would be of limited use, I think, since many of the other field types don't support it. Perhaps a field().type() method would be useful? I'm not sure of many other cases where it might be used, but perhaps for common tasks on a specific field type?
Allan
Agreed. It might not find wide usage, so I can't insist on a feature just for myself. Nevertheless, it would be good to have field().type() in the long run, albeit not immediately.
That's it committed, and it will be in the next release.
Allan