r18678
I moved update all to a thread pool and made the pool size a preference (svnThreadPoolSize). The default value of 2 actually seems to run faster on my system than the old way, probably because update all now reuses threads rather than creating a whole bunch and then destroying them. I decided there was no reason to introduce an additional delay between updates, but if one turns out to be needed it is a simple change to the ExecutorService.
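A minimal sketch of the pattern, not the actual implementation: a fixed-size ExecutorService whose size comes from the svnThreadPoolSize preference (here faked with a system property and a default of 2), with each script update submitted as a task. The script names and the preference-reading helper are illustrative.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class UpdateAllSketch {
    // Hypothetical stand-in for reading the svnThreadPoolSize preference; default 2.
    static int poolSizeFromPreferences() {
        return Integer.getInteger("svnThreadPoolSize", 2);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSizeFromPreferences());
        List<String> scripts = List.of("scriptA", "scriptB", "scriptC");

        // Submit every update up front; the pool reuses its threads
        // instead of creating and destroying one per script.
        List<Future<String>> results = scripts.stream()
                .map(name -> pool.submit(() -> "updated " + name))
                .toList();

        for (Future<String> f : results) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```

Swapping newFixedThreadPool for a ScheduledExecutorService would be the simple change if a delay between updates ever becomes necessary.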
The recommendation from SourceForge was to use svn: instead of https: as the connection protocol because it makes fewer connections. However, there are comments in the SVN code implying that changing the protocol is not going to work, even with the switch command. So the only safe way, AFAIK, to change the protocol is to delete the local copy and then grab a fresh copy using the svn: protocol.
So I think we need to gradually migrate published URLs to the svn: protocol, but the user will have to decide when to delete and update. Support for automating this can be considered at some point in the future, but it might not be needed.
Needless to say, if SourceForge is complaining about too many connections or something similar, that is probably worth further investigation. I think one error updating a script will not stop update all from trying the others, but I may be wrong and need to reconsider the implementation.
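I haven't verified that the current code behaves this way, but the usual way to keep one failing update from stopping the rest is to catch each task's ExecutionException individually when collecting results. A sketch with made-up script names, where one task deliberately throws:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class IsolatedUpdates {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<String> scripts = List.of("good1", "bad", "good2");

        Map<String, Future<String>> futures = new LinkedHashMap<>();
        for (String name : scripts) {
            futures.put(name, pool.submit(() -> {
                // Simulate one script's update failing.
                if (name.equals("bad")) {
                    throw new IllegalStateException("connection refused");
                }
                return "updated " + name;
            }));
        }

        // Collect results one by one; a failure is reported, not fatal,
        // so the remaining updates still run to completion.
        for (Map.Entry<String, Future<String>> e : futures.entrySet()) {
            try {
                System.out.println(e.getValue().get());
            } catch (ExecutionException ex) {
                System.out.println("failed " + e.getKey() + ": "
                        + ex.getCause().getMessage());
            }
        }
        pool.shutdown();
    }
}
```

The key point is that get() rethrows a task's exception wrapped in ExecutionException, so each script's outcome can be handled independently.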
But, as poorly trained customer service reps who don't want to deal with your problem like to say, "It is working for me."