I feel like we're still not communicating.
WHAT HAPPENS NOW
Joe has an svn project, joescript. He has a data file in that project, svn/joescript/data/joescript.txt. When his script is installed via svn, the "default" version of that data file gets copied to $ROOT/data/joescript.txt. His script routinely writes to the data file in $ROOT/data.
Today, he commits an update to his repository's version of data/joescript.txt. When clients update, this new "default" data file gets copied to $ROOT/data/joescript.txt, overwriting it entirely.
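For concreteness, a minimal sketch of that copy step in Python - the function and argument names here are hypothetical, not the actual updater's code:

    import shutil
    from pathlib import Path

    # Hypothetical sketch of the current behavior: on install or update,
    # the "default" data file from the svn working copy is copied over
    # the live data file, clobbering anything the script wrote locally.
    def install_default_data(checkout_root: Path, install_root: Path) -> None:
        src = checkout_root / "data" / "joescript.txt"  # svn working copy
        dst = install_root / "data" / "joescript.txt"   # $ROOT/data
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(src, dst)  # overwrites the local file entirely

Note the trade-off this makes: the user can lose local data on an update, but the installed file is always a clean, parseable copy.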
WHAT WOULD HAPPEN
Joe's script routinely writes to his data file. But now there is only one data file - svn/joescript/data/joescript.txt. Writes from the script go directly into the svn working copy of the file.
Today, he commits an update to his repository's version of joescript/data/joescript.txt. Clients update, and every one of them tries to merge the repository's changes into its local working copy of the file. Because the script has been writing to that file all along, every client's copy has local modifications, so somewhere between one and all of those merges will conflict - and now you have a ton of users complaining "why my svn broke, I didn't do anything to it."
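To make "broke" concrete: when the merge fails, svn leaves its standard conflict markers in the data file, drops joescript.txt.mine and joescript.txt.rNN copies beside it, and keeps the file flagged as conflicted until someone runs svn resolved. The contents below are invented for the example; only the marker format is real:

    <<<<<<< .mine
    lastrun=1236108000
    =======
    lastrun=0
    >>>>>>> .r42

A data file littered with those markers is exactly what leaves the script persistently broken.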
WHY ARE DATA FILES HANDLED DIFFERENTLY?
Scripts procedurally write to and update their data files. Scripts don't procedurally write into themselves. The only time you will ever have a merge operation on a file in scripts/ (or relay, images, etc.) is when the user has manually modified it, which is fine - such a user can presumably deal with conflicts.
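Put differently, the updater needs exactly one special rule. A hypothetical sketch of that rule, reusing the directory names from the examples above:

    import shutil
    from pathlib import Path

    MERGE_DIRS = {"scripts", "relay", "images"}  # user-edited; merging is acceptable
    COPY_DIRS = {"data"}                         # script-written; overwrite, never merge

    # Hypothetical dispatch: data files get the dumb copy-over treatment,
    # everything else is left to svn's normal update/merge machinery.
    def handle_updated_file(rel_path: Path, checkout_root: Path,
                            install_root: Path) -> None:
        if rel_path.parts[0] in COPY_DIRS:
            dst = install_root / rel_path
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copyfile(checkout_root / rel_path, dst)
        # else: nothing to do - svn update has already merged it in place

MERGE_DIRS is never consulted because those files need no special handling; it is listed only to make the split explicit.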
This is a corner case (how many authors commit updates to their svn project's data files, anyway?), but it is a serious issue nonetheless. Loss of local data is fine - a persistently broken script due to a failed merge that was in no way the user's fault is not fine. Merges are not something people should ever encounter unless they have manually altered their working copy files.