Namespace DotCompute.Memory
Classes
- AcceleratorContext
Represents the context for accelerator operations.
- AdvancedMemoryTransferEngine
Advanced memory transfer engine with high-performance optimizations for large datasets, concurrent transfers, streaming operations, and memory-mapped file support.
- BaseDeviceBuffer<T>
Base class for device-specific memory buffers (GPU memory).
- BaseMemoryBuffer<T>
Abstract base class for memory buffer implementations, consolidating common patterns previously duplicated across 15+ buffer implementations.
- BasePinnedBuffer<T>
Base class for pinned memory buffers (CPU memory pinned for GPU access).
- BasePooledBuffer<T>
Base class for pooled memory buffers with automatic recycling.
- BaseUnifiedBuffer<T>
Base class for unified memory buffers (accessible from both CPU and GPU).
- BufferDiagnosticInfo
Comprehensive diagnostic information for a buffer.
- BufferMemoryInfo
Information about buffer memory allocation and usage.
- BufferSnapshot
Snapshot of buffer state at a specific point in time.
- BufferTransferStats
Performance statistics for buffer transfers.
- BufferValidationResult
Results of buffer validation checks.
- HighPerformanceObjectPool<T>
High-performance object pool optimized for compute workloads with:
- Lock-free operations using ConcurrentStack
- Automatic pool size management
- Thread-local storage for hot paths
- Performance metrics and monitoring
- Configurable eviction policies
- NUMA-aware allocation when available
Target: 90%+ allocation reduction for frequent operations.
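The lock-free rent/return pattern such a pool relies on can be sketched with the BCL's `ConcurrentStack<T>`; the type and member names below (`SimplePool`, `maxRetained`) are illustrative, not DotCompute's actual API:

```csharp
using System.Collections.Concurrent;

// Minimal sketch of a lock-free object pool built on ConcurrentStack<T>.
// Names here are hypothetical, not DotCompute's API.
public sealed class SimplePool<T> where T : class, new()
{
    private readonly ConcurrentStack<T> _items = new();
    private readonly int _maxRetained;

    public SimplePool(int maxRetained = 64) => _maxRetained = maxRetained;

    // Pop an existing instance if one is available; otherwise allocate.
    public T Rent() => _items.TryPop(out var item) ? item : new T();

    // Return an instance for reuse, discarding it if the pool is full.
    public void Return(T item)
    {
        if (_items.Count < _maxRetained)
            _items.Push(item);
    }
}
```

`ConcurrentStack<T>` gives LIFO reuse, which keeps recently returned (cache-warm) objects at the top of the pool.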
- MemoryAllocator
A high-performance memory allocator that provides aligned memory allocation and efficient memory management. Supports both pinned and unpinned allocations with platform-specific optimizations.
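Aligned native allocation of this kind can be expressed with the BCL's `NativeMemory` helpers (.NET 6+); this is a sketch of the underlying technique, not `MemoryAllocator`'s actual surface:

```csharp
using System;
using System.Runtime.InteropServices;

unsafe class AlignedAllocDemo
{
    static void Main()
    {
        // Allocate 1024 bytes aligned to a 64-byte cache-line boundary.
        void* p = NativeMemory.AlignedAlloc(byteCount: 1024, alignment: 64);
        try
        {
            // The returned pointer is guaranteed to be 64-byte aligned.
            Console.WriteLine(((nuint)p % 64) == 0); // True
        }
        finally
        {
            // Aligned allocations must be released with AlignedFree.
            NativeMemory.AlignedFree(p);
        }
    }
}
```

Cache-line alignment (64 bytes on most x86/ARM parts) avoids split loads and false sharing, which is why aligned allocators matter for SIMD and multi-threaded workloads.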
- MemoryAllocatorStatistics
A class that represents memory allocator statistics.
- MemoryMappedOperations
Memory mapping utilities for zero-copy file operations.
- MemoryMappedSpan<T>
Represents a memory-mapped span for zero-copy file access.
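The zero-copy file access this enables builds on .NET's memory-mapped file APIs; a minimal sketch using the BCL directly (the file name is hypothetical):

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

class MmapDemo
{
    static void Main()
    {
        const string path = "data.bin"; // hypothetical file
        File.WriteAllBytes(path, new byte[] { 1, 2, 3, 4 });

        // Map the file and read through the view; the OS pages the file
        // into memory directly instead of copying it into a managed buffer.
        using var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open);
        using var accessor = mmf.CreateViewAccessor(offset: 0, size: 4);
        Console.WriteLine(accessor.ReadByte(position: 2)); // 3
    }
}
```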
- MemoryPool
Memory pool for efficient buffer reuse with comprehensive statistics. This consolidates memory pool functionality from multiple implementations.
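The rent/return discipline such a pool enforces mirrors the BCL's `ArrayPool<T>`; a minimal illustration of the technique (not `MemoryPool`'s actual API):

```csharp
using System;
using System.Buffers;

class PoolDemo
{
    static void Main()
    {
        // Rent a buffer of at least 1000 elements from the shared pool.
        float[] buffer = ArrayPool<float>.Shared.Rent(1000);
        try
        {
            // The pool may hand back a larger buffer than requested.
            Console.WriteLine(buffer.Length >= 1000); // True
        }
        finally
        {
            // Return it so subsequent Rent calls can reuse the allocation.
            ArrayPool<float>.Shared.Return(buffer);
        }
    }
}
```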
- MemoryStatistics
Consolidated memory usage and performance statistics, replacing all duplicate MemoryStatistics implementations across the codebase.
Provides comprehensive tracking for:
- Allocation and deallocation counts
- Memory usage (current, peak, total)
- Pool efficiency metrics
- Performance timing data
- Error and failure statistics
- OptimizedUnifiedBuffer<T>
Performance-optimized unified buffer with advanced memory management patterns:
- Object pooling for frequent allocations (90% reduction target)
- Lazy initialization for expensive operations
- Zero-copy operations using Span<T> and Memory<T>
- Async-first design with optimized synchronization
- Memory prefetching for improved cache performance
- NUMA-aware memory allocation
- PinnedMemoryOperations
Pinned memory utilities for interop scenarios.
- PoolConfiguration
Configuration options for the object pool.
- UnifiedBufferHelpers
Helper methods and utilities for UnifiedBuffer operations.
- UnifiedBufferSlice<T>
Represents a slice view of a UnifiedBuffer that provides access to a contiguous subset of elements. This is a lightweight wrapper that doesn't own the underlying memory.
- UnifiedBufferView<TOriginal, TView>
Represents a type-cast view of a UnifiedBuffer that provides access to the same memory with a different element type. This is a lightweight wrapper that doesn't own the underlying memory but reinterprets it as a different type.
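Reinterpreting a buffer's memory as a different element type without copying is the Span-level technique sketched below, using the BCL's `MemoryMarshal` (not `UnifiedBufferView`'s actual API):

```csharp
using System;
using System.Runtime.InteropServices;

class ViewDemo
{
    static void Main()
    {
        // Four 32-bit integers occupy 16 bytes.
        Span<int> ints = stackalloc int[] { 1, 2, 3, 4 };

        // Reinterpret the same memory as bytes -- no allocation, no copy.
        Span<byte> bytes = MemoryMarshal.Cast<int, byte>(ints);
        Console.WriteLine(bytes.Length); // 16

        // Writes through one view are visible through the other.
        bytes[0] = 42;
        Console.WriteLine(ints[0]); // 42 (low byte of the first int, little-endian)
    }
}
```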
- UnifiedBuffer<T>
Core unified buffer implementation providing basic buffer operations and properties. Handles fundamental buffer state management and access patterns.
- UnifiedMemoryManager
Production-ready unified memory manager that consolidates all memory management functionality; the single source of truth for memory management in DotCompute.
Features:
- Memory pooling with 90% allocation reduction
- Automatic cleanup and defragmentation
- Cross-backend compatibility (CPU, CUDA, Metal, etc.)
- Production-grade error handling
- Comprehensive statistics and monitoring
- Thread-safe operations
- UnsafeMemoryOperations
Provides high-performance unsafe memory operations with platform-specific optimizations. Includes zero-copy operations, proper memory alignment, and SIMD optimizations.
- ZeroCopyOperations
High-performance zero-copy operations using Span<T> and Memory<T>:
- Memory-mapped file operations for large datasets
- Pinned memory operations with automatic cleanup
- Vectorized memory operations with SIMD acceleration
- Unsafe pointer operations for maximum performance
- Interop-friendly memory layouts for native code
- Buffer slicing and views without allocation
Target: Eliminate 95%+ of memory copies in hot paths.
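Allocation-free slicing of the kind listed above falls out of `Span<T>` directly; a minimal sketch of the technique:

```csharp
using System;

class SliceDemo
{
    static void Main()
    {
        int[] data = { 10, 20, 30, 40, 50 };

        // AsSpan creates a view over the same array -- no copy is made.
        Span<int> middle = data.AsSpan(start: 1, length: 3);
        Console.WriteLine(middle[0]); // 20

        // Mutations through the slice write through to the original array.
        middle[2] = 99;
        Console.WriteLine(data[3]); // 99
    }
}
```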
Structs
- PinnedMemoryHandle<T>
Represents a pinned memory handle with automatic cleanup.
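Pinning with deterministic cleanup typically wraps a `GCHandle`; a sketch of the underlying BCL mechanism (not `PinnedMemoryHandle<T>`'s actual members):

```csharp
using System;
using System.Runtime.InteropServices;

class PinDemo
{
    static void Main()
    {
        double[] data = { 1.0, 2.0, 3.0 };

        // Pin the array so the GC cannot move it while native code
        // (e.g. a GPU driver) holds a raw pointer into it.
        GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        try
        {
            IntPtr ptr = handle.AddrOfPinnedObject();
            Console.WriteLine(ptr != IntPtr.Zero); // True
        }
        finally
        {
            // Always release the pin, or the array stays immovable forever.
            handle.Free();
        }
    }
}
```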
- PoolStatistics
Performance statistics for object pools.
Enums
- AcceleratorType
Enumerates the supported accelerator types.
- MapMode
Specifies the access mode for mapping memory buffers.