fix: file browser shows newest files first, bump limit to 5000 #2856

JustHereToHelp wants to merge 1 commit into wavetermdev:main from
Conversation
Sort directory entries by mtime descending before truncation so the most recent files always appear when a directory exceeds the limit. Bumped MaxDirSize from 1024 to 5000. Uses DirEntry.Info() instead of os.Stat() to avoid extra syscalls. Fixes wavetermdev#2830
Walkthrough

The changes sort file directory entries by modification time in descending order and increase the maximum directory size limit. Specifically, the sort package is imported and sorting logic is added to process directory entries from newest to oldest before applying byte-range filtering and chunking. Additionally, the MaxDirSize constant is increased from 1024 to 5000 entries, allowing the system to handle larger directories.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ✅ 5 passed
```go
iInfo, iErr := innerFilesEntries[i].Info()
jInfo, jErr := innerFilesEntries[j].Info()
if iErr != nil {
	return false
```
WARNING: Inconsistent error handling in sort comparator
When iErr != nil, the function returns false (treating i as "less than" j), but when jErr != nil, it returns true (treating j as "greater than" i). This asymmetric handling means entries with stat errors will be sorted to the beginning of the list.
Consider handling both error cases consistently, such as:

- Return `false` for both error cases (keep original order)
- Or skip entries with errors entirely before sorting
```go
	return fmt.Errorf("cannot open dir %q: %w", path, err)
}
sort.Slice(innerFilesEntries, func(i, j int) bool {
	iInfo, iErr := innerFilesEntries[i].Info()
```
WARNING: Performance concern with sorting large directories
Calling .Info() on every entry during each comparison can be expensive. With the new MaxDirSize of 5000, sorting could trigger up to ~10,000 system calls (each comparison calls Info() twice, and sorting requires O(n log n) comparisons).
Consider caching the FileInfo results before sorting:
```go
type entryWithInfo struct {
	entry os.DirEntry
	info  fs.FileInfo
	err   error
}

// Pre-fetch all Info() calls
entriesWithInfo := make([]entryWithInfo, len(innerFilesEntries))
for i, entry := range innerFilesEntries {
	info, err := entry.Info()
	entriesWithInfo[i] = entryWithInfo{entry, info, err}
}

// Sort using cached info
sort.Slice(entriesWithInfo, func(i, j int) bool {
	if entriesWithInfo[i].err != nil || entriesWithInfo[j].err != nil {
		return false
	}
	return entriesWithInfo[i].info.ModTime().After(entriesWithInfo[j].info.ModTime())
})

// Extract sorted entries
for i, ewi := range entriesWithInfo {
	innerFilesEntries[i] = ewi.entry
}
```
Code Review Summary

Status: 2 Issues Found | Recommendation: Address before merge

Issue Details: WARNING
Files Reviewed (2 files)
🧹 Nitpick comments (1)

pkg/wshrpc/wshremote/wshremote_file.go (1)

Lines 61-71: Consider pre-computing FileInfo to avoid repeated calls during sorting.

The current implementation calls `Info()` on each `DirEntry` during every comparison. While `os.ReadDir` entries cache the `FileInfo` (no extra syscalls), this still invokes the method O(n log n) times during sorting, plus once more per entry in the loop at line 92. A cleaner approach would pre-fetch the modification times once, then sort based on the cached values.

♻️ Suggested optimization
```diff
+	type entryWithMtime struct {
+		entry   os.DirEntry
+		modTime time.Time
+		err     error
+	}
+	entriesWithMtime := make([]entryWithMtime, len(innerFilesEntries))
+	for i, e := range innerFilesEntries {
+		info, err := e.Info()
+		if err != nil {
+			entriesWithMtime[i] = entryWithMtime{entry: e, err: err}
+		} else {
+			entriesWithMtime[i] = entryWithMtime{entry: e, modTime: info.ModTime()}
+		}
+	}
+	sort.Slice(entriesWithMtime, func(i, j int) bool {
+		if entriesWithMtime[i].err != nil {
+			return false
+		}
+		if entriesWithMtime[j].err != nil {
+			return true
+		}
+		return entriesWithMtime[i].modTime.After(entriesWithMtime[j].modTime)
+	})
+	innerFilesEntries = innerFilesEntries[:0]
+	for _, e := range entriesWithMtime {
+		innerFilesEntries = append(innerFilesEntries, e.entry)
+	}
-	sort.Slice(innerFilesEntries, func(i, j int) bool {
-		iInfo, iErr := innerFilesEntries[i].Info()
-		jInfo, jErr := innerFilesEntries[j].Info()
-		if iErr != nil {
-			return false
-		}
-		if jErr != nil {
-			return true
-		}
-		return iInfo.ModTime().After(jInfo.ModTime())
-	})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/wshrpc/wshremote/wshremote_file.go` around lines 61 - 71, The sort comparator calls DirEntry.Info() repeatedly; precompute and cache each entry's FileInfo/ModTime once before calling sort.Slice to avoid O(n log n) Info() calls: build a slice or map (keyed by index) of FileInfo or modTimes for each entry in innerFilesEntries (using the same Info() error handling), then call sort.Slice using those cached modTimes (and cached error results) for comparisons, and update the later loop that currently calls entry.Info() again to use the cached FileInfo from your cache instead; ensure you preserve the original error handling semantics for entries with Info() errors when sorting and iterating.
Fixes #2830
The file browser caps at 1024 entries with no indication anything's missing. My screenshots folder has ~1300 files and I didn't even notice half were gone until I went looking for something recent.
Bumped the limit to 5000 and added mtime sort before truncation so if you do hit the cap, at least your newest files show up. Uses `DirEntry.Info()` instead of `os.Stat()` to keep it fast.