I already outlined your options above, and yes, 6-9 seconds for a collection of 1000 users is actually relatively "fast" 😉... mine can take much longer per 1000 items...
As I mentioned above, I'm not aware of anything "faster", otherwise I would use it... you can get a (paid) DB dump, which is much faster for extracting data once you figure out the table structure, but yes, it's a snapshot and you have to pay for each one... so for me that was never really a viable option.
I also mentioned that you can run concurrent requests, i.e. send out multiple requests in parallel, say 10 at a time, each asking for 1000 items. That speeds things up a bit, but you'll run into 429 (Too Many Requests) rate limits or worse, so you should implement a solid retry mechanism for those requests (see the sketch below).
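Something along these lines, a rough Python sketch assuming session-key auth and the standard /api/2.0/search LiQL endpoint; the URL, session key, field list and page counts are placeholders you'd swap for your own:

```python
# Rough sketch: fetch several LiQL pages in parallel and retry on 429.
# BASE_URL, the session key and the query fields are assumptions.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "https://YOUR-COMMUNITY.example.com/api/2.0/search"
HEADERS = {"li-api-session-key": "YOUR_SESSION_KEY"}  # assumption: session-key auth

def fetch_page(offset, limit=1000, max_retries=5):
    query = f"SELECT id, login FROM users LIMIT {limit} OFFSET {offset}"
    for attempt in range(max_retries):
        resp = requests.get(BASE_URL, headers=HEADERS, params={"q": query}, timeout=120)
        if resp.status_code == 429:
            # rate limited: back off exponentially, then retry the same page
            time.sleep(2 ** attempt)
            continue
        resp.raise_for_status()
        # assumes the usual v2 response shape with data.items
        return resp.json()["data"]["items"]
    raise RuntimeError(f"Gave up on offset {offset} after {max_retries} attempts")

# 10 pages of 1000 users, fetched in parallel
offsets = range(0, 10_000, 1000)
with ThreadPoolExecutor(max_workers=10) as pool:
    pages = list(pool.map(fetch_page, offsets))
users = [item for page in pages for item in page]
```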
And no, of course there is no "mechanism in place" (where did you get that?) to resume anything if your requests fail, that is up to you to implement as a developer 😉... you could do it simply with some kind of index you maintain on your side (wherever you export the data to), but there are many other ways of course.
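For illustration, a minimal checkpoint sketch; the file names are made up and fetch_page() is the hypothetical helper from the previous snippet:

```python
# Minimal resume sketch: persist the next offset to a checkpoint file so a
# crashed run can pick up where it left off. File names are assumptions.
import json
import os

CHECKPOINT_FILE = "export_checkpoint.json"
EXPORT_FILE = "users_export.jsonl"
PAGE_SIZE = 1000

def load_offset():
    """Return the offset to resume from, or 0 on a fresh run."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)["next_offset"]
    return 0

def save_offset(offset):
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump({"next_offset": offset}, f)

offset = load_offset()
while True:
    items = fetch_page(offset, limit=PAGE_SIZE)  # hypothetical helper from the previous sketch
    if not items:
        break
    with open(EXPORT_FILE, "a") as f:            # append so resumed runs don't clobber earlier pages
        for item in items:
            f.write(json.dumps(item) + "\n")
    offset += PAGE_SIZE
    save_offset(offset)                          # advance the checkpoint only after a successful write
```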
EDIT: I might add that with batch queries (which you seem already familiar with based on the screenshots above) you can technically do 50 requests "in one": the batch query mechanism allows a maximum of 50 requests "bundled" into a single call. So you could create "batches" of 50 x 1000 users and send those off to the API endpoint, adjusting limit and offset for each bundled query like you did above, just be aware of that 50-query cap. Unfortunately this seems to be mostly theoretical. I have tried it, e.g. fetching 50k user IDs that way, and you will run into a "hard" timeout from the Khoros API server at 3 minutes; in every instance I tried, I never got a successful response on the first attempt. The Khoros API seems to do some caching in the background: if you try to get 10k users at a time, the first run can easily take 90 seconds, but if you repeat the same call a few times it can return in as little as 7 seconds. The problem, of course, is that we usually want things to a) succeed the first time and b) not have to make the same request twice...
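Just to make the offset math concrete, here's a small sketch that builds one bundle of 50 LiQL queries; how you actually submit the bundle depends on the batch mechanism you're already using in your screenshots, so only the query strings are shown:

```python
# Sketch of assembling one "bundle" of 50 LiQL queries with shifting
# LIMIT/OFFSET windows. Submission of the bundle is left out on purpose,
# since that depends on the batch mechanism you already use.
PAGE_SIZE = 1000
MAX_QUERIES_PER_BATCH = 50  # the hard cap mentioned above

def build_batch(start_offset, page_size=PAGE_SIZE, count=MAX_QUERIES_PER_BATCH):
    return [
        f"SELECT id FROM users LIMIT {page_size} OFFSET {start_offset + i * page_size}"
        for i in range(count)
    ]

# first bundle covers offsets 0..49,000, i.e. 50 x 1000 = 50k user IDs
queries = build_batch(0)
```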
So a hand-optimized strategy combining batch queries and concurrent requests is probably the most efficient way to go about this: try to keep the total response time for a batch query reasonable (I'd keep it under 60 seconds if possible) and then balance the number of concurrent batch requests so you do not hit the API's 60 requests per 60 seconds limit (see the pacing sketch below). I personally don't go over 10k in one batch (and that's for IDs only...), and lower if I need more fields, as responses get slower and slower the more fields you request. Another problem you might face is that responses get slower the higher your offset is, for some reason, so at first it can seem fine to do 25k at once, but as you move on you'll get a ton of 504 Gateway Timeout errors as the offset increases...
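If it helps, here's a rough client-side pacing sketch for staying under that 60-requests-per-60-seconds limit; the limit values come from what's described above, the limiter itself is generic, and fetch_page() again refers to the hypothetical helper from the first sketch:

```python
# Rough sketch of client-side pacing so concurrent workers stay under the
# 60-requests-per-60-seconds limit mentioned above.
import threading
import time
from collections import deque

class SlidingWindowLimiter:
    def __init__(self, max_requests=60, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()
        self.lock = threading.Lock()

    def acquire(self):
        """Block until a request slot is free within the sliding window."""
        while True:
            with self.lock:
                now = time.monotonic()
                # drop timestamps that have fallen out of the window
                while self.timestamps and now - self.timestamps[0] >= self.window:
                    self.timestamps.popleft()
                if len(self.timestamps) < self.max_requests:
                    self.timestamps.append(now)
                    return
                wait = self.window - (now - self.timestamps[0])
            time.sleep(wait)

limiter = SlidingWindowLimiter()

def rate_limited_fetch(offset):
    limiter.acquire()            # blocks until we're allowed to send another request
    return fetch_page(offset)    # hypothetical helper from the first sketch
```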