Which option is faster...

jicman cabrera_ at _wrc.xerox.com
Tue Aug 6 05:32:11 PDT 2013


On Tuesday, 6 August 2013 at 04:10:57 UTC, Andre Artus wrote:
> On Monday, 5 August 2013 at 13:59:24 UTC, jicman wrote:
>>
>> Greetings!
>>
>> I have this code, and I'm wondering which of the two options 
>> below is faster:
>>
>> foreach (...)
>> {
>>
>>  if (std.string.toLower(fext[0]) == "doc" ||
>>    std.string.toLower(fext[0]) == "docx" ||
>>    std.string.toLower(fext[0]) == "xls" ||
>>    std.string.toLower(fext[0]) == "xlsx" ||
>>    std.string.toLower(fext[0]) == "ppt" ||
>>    std.string.toLower(fext[0]) == "pptx")
>>   continue;
>> }
>>
>> foreach (...)
>> {
>>  if (std.string.toLower(fext[0]) == "doc")
>>    continue;
>>  if (std.string.toLower(fext[0]) == "docx")
>>    continue;
>>  if (std.string.toLower(fext[0]) == "xls")
>>    continue;
>>  if (std.string.toLower(fext[0]) == "xlsx")
>>    continue;
>>  if (std.string.toLower(fext[0]) == "ppt")
>>    continue;
>>  if (std.string.toLower(fext[0]) == "pptx")
>>    continue;
>>  ...
>>  ...
>> }
>>
>> thanks.
>>
>> josé
>
> What exactly are you trying to do with this? I get the 
> impression that this is an attempt at "local optimization" 
> where a broader approach could lead to better results.
>
> For instance, using the OS's facilities to filter (six 
> requests, one each for "*.doc", "*.docx", and so on) could 
> actually end up being a lot faster.
>
> If you could give more detail about what you are trying to 
> achieve, it might be possible to suggest a better approach.
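
For what it's worth, either version lower-cases the extension up 
to six times per file; lower-casing once and doing a single 
membership test avoids the repeated calls. A minimal sketch, 
assuming fext[0] holds the bare extension without the dot:

import std.algorithm : canFind;
import std.string : toLower;

// Extensions to skip, already lower-cased.
immutable officeExts = ["doc", "docx", "xls", "xlsx", "ppt", "pptx"];

bool isOfficeExt(string ext)
{
    // Lower-case once, then a single scan over six entries.
    return officeExts.canFind(ext.toLower());
}

unittest
{
    assert(isOfficeExt("DOCX"));
    assert(!isOfficeExt("txt"));
}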

The files are on a network drive, and issuing a separate listing 
for each pattern (*.doc, *.docx, etc.) would be more expensive 
than getting the list of all the files at once and then filtering 
them locally.
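
Roughly what I mean, as a sketch (the share path is a placeholder):

import std.algorithm : canFind;
import std.file : dirEntries, SpanMode;
import std.path : extension;
import std.stdio : writeln;
import std.string : toLower;

void main()
{
    immutable skip = ["doc", "docx", "xls", "xlsx", "ppt", "pptx"];

    // One directory listing over the network; all filtering is local.
    foreach (entry; dirEntries(`\\server\share`, SpanMode.shallow))
    {
        // extension() keeps the leading dot ("" if there is none).
        auto ext = entry.name.extension.toLower();
        if (ext.length && skip.canFind(ext[1 .. $]))
            continue;
        writeln("processing ", entry.name);
    }
}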

