127 changes: 120 additions & 7 deletions README.md
Original file line number Diff line number Diff line change
@@ -23,6 +23,7 @@ consistency, and intelligent work avoidance.**
- [Understanding the Sliding Window](#-understanding-the-sliding-window)
- [Materialization for Fast Access](#-materialization-for-fast-access)
- [Usage Example](#-usage-example)
- [Boundary Handling & Data Availability](#-boundary-handling--data-availability)
- [Resource Management](#-resource-management)
- [Configuration](#-configuration)
- [Execution Strategy Selection](#-execution-strategy-selection)
@@ -318,21 +319,132 @@ var cache = WindowCache<int, string, IntegerFixedStepDomain>.Create(
readMode: UserCacheReadMode.Snapshot
);

// Request data - returns ReadOnlyMemory<string>
var data = await cache.GetDataAsync(
// Request data - returns RangeResult<int, string>
var result = await cache.GetDataAsync(
Range.Closed(100, 200),
cancellationToken
);

// Access the data
foreach (var item in data.Span)
foreach (var item in result.Data.Span)
{
Console.WriteLine(item);
}
```

---

## 🎯 Boundary Handling & Data Availability

The cache provides explicit boundary handling through `RangeResult<TRange, TData>` returned by `GetDataAsync()`. This allows data sources to communicate data availability and partial fulfillment.

### RangeResult Structure

```csharp
public sealed record RangeResult<TRange, TData>(
Range<TRange>? Range, // Actual range returned (nullable)
ReadOnlyMemory<TData> Data // The data for that range
);
```

### Basic Usage

```csharp
var result = await cache.GetDataAsync(
Intervals.NET.Factories.Range.Closed(100, 200),
ct
);

// Always check Range before using Data
if (result.Range != null)
{
Console.WriteLine($"Received {result.Data.Length} elements for range {result.Range}");

foreach (var item in result.Data.Span)
{
ProcessItem(item);
}
}
else
{
Console.WriteLine("No data available for requested range");
}
```

### Why RangeResult?

**Benefits:**
- ✅ **Explicit Contracts**: Know exactly what range was fulfilled
- ✅ **Boundary Awareness**: Data sources signal truncation at physical boundaries
- ✅ **No Exceptions for Normal Cases**: Out-of-bounds is expected, not exceptional
- ✅ **Partial Fulfillment**: Handle cases where only part of requested range is available
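The three outcomes above can be distinguished at the call site by comparing the returned `Range` against the one that was requested. A minimal sketch; the `requested` variable, `ProcessItem`, and the equality semantics of `Range<int>` are illustrative assumptions rather than documented API:

```csharp
var requested = Intervals.NET.Factories.Range.Closed(500, 1500);
var result = await cache.GetDataAsync(requested, ct);

if (result.Range == null)
{
    // Full miss: the requested range lies entirely outside the source's bounds.
    Console.WriteLine("No data available");
}
else if (result.Range.Equals(requested))
{
    // Full fulfillment: every requested element was delivered.
    foreach (var item in result.Data.Span) ProcessItem(item);
}
else
{
    // Partial fulfillment: the data source truncated at a physical boundary.
    Console.WriteLine($"Requested {requested}, received {result.Range}");
    foreach (var item in result.Data.Span) ProcessItem(item);
}
```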

### Bounded Data Sources Example

For data sources with physical boundaries (databases with min/max IDs, APIs with limits):

```csharp
public class BoundedDatabaseSource : IDataSource<int, Record>
{
private const int MinId = 1000;
private const int MaxId = 9999;

public async Task<RangeChunk<int, Record>> FetchAsync(
Range<int> requested,
CancellationToken ct)
{
var availableRange = Intervals.NET.Factories.Range.Closed(MinId, MaxId);
var fulfillable = requested.Intersect(availableRange);

// No data available
if (fulfillable == null)
{
return new RangeChunk<int, Record>(
null, // Range must be null to signal no data available
Array.Empty<Record>()
);
}

// Fetch available portion
var data = await _db.FetchRecordsAsync(
fulfillable.LowerBound.Value,
fulfillable.UpperBound.Value,
ct
);

return new RangeChunk<int, Record>(fulfillable, data);
}
}

// Example scenarios:
// Request [2000..3000] → Range = [2000..3000], 1001 records ✓
// Request [500..1500] → Range = [1000..1500], 501 records (truncated) ✓
// Request [0..999] → Range = null, empty data ✓
```

### Handling Subset Requests

When a request covers a subset of already-cached data, the returned `RangeResult` contains only the requested range, not the full cached window:

```csharp
// Prime cache with large range
await cache.GetDataAsync(Intervals.NET.Factories.Range.Closed(0, 1000), ct);

// Request subset (served from cache)
var subset = await cache.GetDataAsync(
Intervals.NET.Factories.Range.Closed(100, 200),
ct
);

// Result contains ONLY the requested subset
Assert.Equal(101, subset.Data.Length); // [100, 200] = 101 elements
Assert.Equal(Intervals.NET.Factories.Range.Closed(100, 200), subset.Range); // xUnit: expected value first
```

**For complete boundary handling documentation, see:** [Boundary Handling Guide](docs/boundary-handling.md)

---

## 🔄 Resource Management

WindowCache manages background processing tasks and resources that require explicit disposal. **Always dispose the cache when done** to prevent resource leaks and ensure graceful shutdown of background operations.
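For short-lived scopes, disposal can be tied to the enclosing scope with `await using`. A sketch, assuming the cache type implements `IAsyncDisposable` (consistent with the `DisposeAsync` call in the `DataService` example below) and that `dataSource`, `domain`, and a `ct` token are in scope:

```csharp
// Cache lifetime bound to the enclosing scope: background tasks are shut
// down gracefully when execution leaves the scope, even on exceptions.
await using var cache = WindowCache<int, string, IntegerFixedStepDomain>.Create(
    dataSource,
    domain,
    readMode: UserCacheReadMode.Snapshot
);

var result = await cache.GetDataAsync(Range.Closed(100, 200), ct);
```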
@@ -406,7 +518,7 @@ public class DataService : IAsyncDisposable
);
}

public ValueTask<ReadOnlyMemory<string>> GetDataAsync(Range<int> range, CancellationToken ct)
public ValueTask<RangeResult<int, string>> GetDataAsync(Range<int> range, CancellationToken ct)
=> _cache.GetDataAsync(range, ct);

public async ValueTask DisposeAsync()
@@ -757,9 +869,10 @@ see [Diagnostics Guide](docs/diagnostics.md).**

1. **[README - Quick Start](#-quick-start)** - Basic usage examples (you're already here!)
2. **[README - Configuration Guide](#configuration)** - Understand the 5 key parameters
3. **[Storage Strategies](docs/storage-strategies.md)** - Choose Snapshot vs CopyOnRead for your use case
4. **[Glossary - Common Misconceptions](docs/glossary.md#common-misconceptions)** - Avoid common pitfalls
5. **[Diagnostics](docs/diagnostics.md)** - Add optional instrumentation for visibility
3. **[Boundary Handling](docs/boundary-handling.md)** - RangeResult usage, bounded data sources, partial fulfillment
4. **[Storage Strategies](docs/storage-strategies.md)** - Choose Snapshot vs CopyOnRead for your use case
5. **[Glossary - Common Misconceptions](docs/glossary.md#common-misconceptions)** - Avoid common pitfalls
6. **[Diagnostics](docs/diagnostics.md)** - Add optional instrumentation for visibility

**When to use this path**: Building features, integrating the cache, performance tuning.

@@ -245,7 +245,7 @@ private void SetupCache(int? rebalanceQueueCapacity)

// Build initial range for first request
var initialRange = Intervals.NET.Factories.Range.Closed<int>(
InitialStart,
InitialStart,
InitialStart + BaseSpanSize - 1
);

@@ -73,8 +73,8 @@ public class UserFlowBenchmarks
private Range<int> _partialHitBackwardRange;
private Range<int> _fullMissRange;

private WindowCacheOptions _snapshotOptions;
private WindowCacheOptions _copyOnReadOptions;
private WindowCacheOptions? _snapshotOptions;
private WindowCacheOptions? _copyOnReadOptions;

[GlobalSetup]
public void GlobalSetup()
@@ -120,13 +120,13 @@ public void IterationSetup()
_snapshotCache = new WindowCache<int, int, IntegerFixedStepDomain>(
_dataSource,
_domain,
_snapshotOptions
_snapshotOptions!
);

_copyOnReadCache = new WindowCache<int, int, IntegerFixedStepDomain>(
_dataSource,
_domain,
_copyOnReadOptions
_copyOnReadOptions!
);

// Prime both caches with known initial window
@@ -155,15 +155,15 @@ public void IterationCleanup()
public async Task<ReadOnlyMemory<int>> User_FullHit_Snapshot()
{
// No rebalance triggered
return await _snapshotCache!.GetDataAsync(_fullHitRange, CancellationToken.None);
return (await _snapshotCache!.GetDataAsync(_fullHitRange, CancellationToken.None)).Data;
}

[Benchmark]
[BenchmarkCategory("FullHit")]
public async Task<ReadOnlyMemory<int>> User_FullHit_CopyOnRead()
{
// No rebalance triggered
return await _copyOnReadCache!.GetDataAsync(_fullHitRange, CancellationToken.None);
return (await _copyOnReadCache!.GetDataAsync(_fullHitRange, CancellationToken.None)).Data;
}

#endregion
@@ -175,31 +175,31 @@ public async Task<ReadOnlyMemory<int>> User_FullHit_CopyOnRead()
public async Task<ReadOnlyMemory<int>> User_PartialHit_ForwardShift_Snapshot()
{
// Rebalance triggered, handled in cleanup
return await _snapshotCache!.GetDataAsync(_partialHitForwardRange, CancellationToken.None);
return (await _snapshotCache!.GetDataAsync(_partialHitForwardRange, CancellationToken.None)).Data;
}

[Benchmark]
[BenchmarkCategory("PartialHit")]
public async Task<ReadOnlyMemory<int>> User_PartialHit_ForwardShift_CopyOnRead()
{
// Rebalance triggered, handled in cleanup
return await _copyOnReadCache!.GetDataAsync(_partialHitForwardRange, CancellationToken.None);
return (await _copyOnReadCache!.GetDataAsync(_partialHitForwardRange, CancellationToken.None)).Data;
}

[Benchmark]
[BenchmarkCategory("PartialHit")]
public async Task<ReadOnlyMemory<int>> User_PartialHit_BackwardShift_Snapshot()
{
// Rebalance triggered, handled in cleanup
return await _snapshotCache!.GetDataAsync(_partialHitBackwardRange, CancellationToken.None);
return (await _snapshotCache!.GetDataAsync(_partialHitBackwardRange, CancellationToken.None)).Data;
}

[Benchmark]
[BenchmarkCategory("PartialHit")]
public async Task<ReadOnlyMemory<int>> User_PartialHit_BackwardShift_CopyOnRead()
{
// Rebalance triggered, handled in cleanup
return await _copyOnReadCache!.GetDataAsync(_partialHitBackwardRange, CancellationToken.None);
return (await _copyOnReadCache!.GetDataAsync(_partialHitBackwardRange, CancellationToken.None)).Data;
}

#endregion
@@ -212,7 +212,7 @@ public async Task<ReadOnlyMemory<int>> User_FullMiss_Snapshot()
{
// No overlap - full cache replacement
// Rebalance triggered, handled in cleanup
return await _snapshotCache!.GetDataAsync(_fullMissRange, CancellationToken.None);
return (await _snapshotCache!.GetDataAsync(_fullMissRange, CancellationToken.None)).Data;
}

[Benchmark]
@@ -221,7 +221,7 @@ public async Task<ReadOnlyMemory<int>> User_FullMiss_CopyOnRead()
{
// No overlap - full cache replacement
// Rebalance triggered, handled in cleanup
return await _copyOnReadCache!.GetDataAsync(_fullMissRange, CancellationToken.None);
return (await _copyOnReadCache!.GetDataAsync(_fullMissRange, CancellationToken.None)).Data;
}

#endregion
@@ -1,6 +1,5 @@
using Intervals.NET;
using Intervals.NET.Domain.Default.Numeric;
using Intervals.NET.Domain.Extensions.Fixed;
using SlidingWindowCache.Public;
using SlidingWindowCache.Public.Dto;

@@ -31,14 +30,14 @@ public SlowDataSource(IntegerFixedStepDomain domain, TimeSpan latency)
/// Fetches data for a single range with simulated latency.
/// Respects cancellation token to allow early exit during debounce or execution cancellation.
/// </summary>
public async Task<IEnumerable<int>> FetchAsync(Range<int> range, CancellationToken cancellationToken)
public async Task<RangeChunk<int, int>> FetchAsync(Range<int> range, CancellationToken cancellationToken)
{
// Simulate I/O latency (network/database delay)
// This delay is cancellable, allowing execution strategies to abort obsolete fetches
await Task.Delay(_latency, cancellationToken).ConfigureAwait(false);

// Generate data after delay completes
return GenerateDataForRange(range);
return new RangeChunk<int, int>(range, GenerateDataForRange(range));
}

/// <summary>
@@ -24,8 +24,8 @@ public SynchronousDataSource(IntegerFixedStepDomain domain)
/// Fetches data for a single range with zero latency.
/// Data generation: Returns the integer value at each position in the range.
/// </summary>
public Task<IEnumerable<int>> FetchAsync(Range<int> range, CancellationToken cancellationToken) =>
Task.FromResult(GenerateDataForRange(range));
public Task<RangeChunk<int, int>> FetchAsync(Range<int> range, CancellationToken cancellationToken) =>
Task.FromResult(new RangeChunk<int, int>(range, GenerateDataForRange(range)));

/// <summary>
/// Fetches data for multiple ranges with zero latency.
6 changes: 4 additions & 2 deletions docs/actors-and-responsibilities.md
@@ -23,7 +23,9 @@ Handles user requests with minimal latency and maximal isolation from background

**Critical Contract:**
```
Every user access produces a rebalance intent containing delivered data.
Every user access that results in assembled data publishes a rebalance intent containing
that delivered data. Requests where IDataSource returns null (physical boundary misses)
do not publish an intent — there is no data to embed (Invariant C.24e).
The UserRequestHandler is READ-ONLY with respect to cache state.
The UserRequestHandler NEVER invokes decision logic directly - it just publishes an intent.
```
@@ -171,7 +173,7 @@ IntentController (User Thread for PublishIntent; Background Thread for ProcessIn
**Enhanced Role (Decision-Driven Model):**

Now responsible for:
- **Receiving intents** (on every user request) [IntentController.PublishIntent - User Thread]
- **Receiving intents** (when user request produces assembled data) [IntentController.PublishIntent - User Thread]
- **Owning and invoking DecisionEngine** [IntentController - Background Thread (intent processing loop), synchronous]
- **Intent identity and versioning** via ExecutionRequest snapshot [IntentController]
- **Cancellation coordination** based on validation results from owned DecisionEngine [IntentController - Background Thread]