Hash: 8469debb
Author:
Date: 2024-09-04T20:04:10
Vulkan: Add dual slots in CompressAndStorePipelineCacheVk()

This change fixes the following problem: currently, each call to
`CompressAndStorePipelineCacheVk()` stores chunks in order, starting
from 0, which overwrites previously stored chunks. If the app is
terminated (killed) in the middle of this process, the entire cache
data will be corrupted, since it will partially contain chunks from
both the new and the old caches.

Solution: this change introduces `slotIndex` into the `chunkCacheHash`
calculation. The slot index is managed by the
`vk::Renderer::getNextPipelineCacheBlobCacheSlotIndex()` method, which
alternates between 0 and 1 when the "useDualPipelineBlobCacheSlots"
feature is enabled, and is always 0 otherwise.

Additionally, the chunk storing order is reversed: the last chunk is
stored first and the first chunk (chunk 0) last. This is done because
chunk 0 is the first chunk loaded in `GetAndDecompressPipelineCacheVk()`
and is used as the indication that there is data in the cache. Writing
it last ensures that the other chunks are also available.

When "useDualPipelineBlobCacheSlots" is enabled, each call to
`CompressAndStorePipelineCacheVk()` uses the slot index opposite to the
slot currently stored in the cache, avoiding damage to existing data.
After writing all chunks there may briefly be two instances of the
data; however, the data for the previous slot is immediately erased (by
writing 1- or 0-sized blobs), starting from chunk 0. To control whether
old pipeline cache data is erased with 0-sized or 1-sized blobs, the
`useEmptyBlobsToEraseOldPipelineCacheFromBlobCache` feature is added.

The `GetAndDecompressPipelineCacheVk()` function iterates over each
available slot index, checking only chunk 0, until data is found.

In the case of the OpenCL API, the features always have the following
values:
- "useDualPipelineBlobCacheSlots" -> false
- "useEmptyBlobsToEraseOldPipelineCacheFromBlobCache" -> true

Note: this solution requires 2X the pipeline cache size in the blob
cache to work as expected; otherwise it will exacerbate another
problem: when the blob cache is full but can still store the current
pipeline cache data, storing the next chunk may trigger eviction of
already stored items. Depending on the blob cache implementation, the
eviction process may choose to evict chunks belonging to the current
pipeline cache data. As a result, the blob cache will not contain all
chunks. This problem will be addressed in a follow-up CL.

Bug: angleproject:4722
Change-Id: I2920bc3d89263280cdfe0466446fca26415e2b25
Reviewed-on: https://chromium-review.googlesource.com/c/angle/angle/+/5756576
Reviewed-by: Charlie Lao <cclao@google.com>
Reviewed-by: Shahbaz Youssefi <syoussefi@chromium.org>
Commit-Queue: Igor Nazarov <i.nazarov@samsung.com>
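To make the write path concrete, here is a minimal C++ sketch of the
dual-slot scheme described above. It is an illustration under
simplifying assumptions, not the ANGLE implementation: `BlobCache`,
`ComputeChunkKey()`, `GetNextSlotIndex()`, and `StorePipelineCache()`
are hypothetical stand-ins for the real blob cache interface, the
`chunkCacheHash` calculation,
`vk::Renderer::getNextPipelineCacheBlobCacheSlotIndex()`, and
`CompressAndStorePipelineCacheVk()`.

#include <cstddef>
#include <cstdint>
#include <functional>
#include <map>
#include <vector>

using BlobKey   = std::size_t;
using Blob      = std::vector<uint8_t>;
using BlobCache = std::map<BlobKey, Blob>;

// Hypothetical stand-in for the chunkCacheHash calculation: the slot index is
// mixed into the key so chunks of the two slots never collide.
BlobKey ComputeChunkKey(uint32_t slotIndex, size_t chunkIndex)
{
    return std::hash<size_t>{}(static_cast<size_t>(slotIndex) * 1000003u + chunkIndex);
}

// Mirrors the described behavior of getNextPipelineCacheBlobCacheSlotIndex():
// alternate between 0 and 1 when dual slots are enabled, otherwise always 0.
uint32_t GetNextSlotIndex(uint32_t *currentSlot, bool useDualSlots)
{
    if (!useDualSlots)
    {
        return 0;
    }
    *currentSlot = 1 - *currentSlot;
    return *currentSlot;
}

void StorePipelineCache(BlobCache *cache,
                        uint32_t slotIndex,
                        const std::vector<Blob> &chunks,
                        bool useEmptyBlobsToErase)
{
    // Store chunks in reverse order: chunk 0 is written last, so its presence
    // implies all other chunks of this slot are already in the cache.
    for (size_t i = chunks.size(); i-- > 0;)
    {
        (*cache)[ComputeChunkKey(slotIndex, i)] = chunks[i];
    }

    // Erase the previous slot, starting from chunk 0, by overwriting its
    // chunks with 0-sized (or 1-sized) blobs. For simplicity this sketch
    // assumes the old slot had at most chunks.size() chunks.
    const uint32_t oldSlot = 1 - slotIndex;
    const Blob eraseBlob(useEmptyBlobsToErase ? 0 : 1);
    for (size_t i = 0; i < chunks.size(); ++i)
    {
        (*cache)[ComputeChunkKey(oldSlot, i)] = eraseBlob;
    }
}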
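The read path can be sketched the same way, reusing `BlobCache` and
`ComputeChunkKey()` from the sketch above. In the real code the chunk
count would be recovered from the cached data itself; here it is passed
in as a parameter for brevity.

bool LoadPipelineCache(const BlobCache &cache,
                       size_t chunkCount,
                       std::vector<Blob> *chunksOut)
{
    for (uint32_t slot = 0; slot < 2; ++slot)
    {
        // Chunk 0 is written last, so a real (non-erased) chunk 0 implies the
        // slot is complete. Erased chunks are 0- or 1-sized.
        auto it = cache.find(ComputeChunkKey(slot, 0));
        if (it == cache.end() || it->second.size() <= 1)
        {
            continue;  // No data in this slot; try the next one.
        }
        chunksOut->clear();
        for (size_t i = 0; i < chunkCount; ++i)
        {
            auto chunkIt = cache.find(ComputeChunkKey(slot, i));
            if (chunkIt == cache.end() || chunkIt->second.size() <= 1)
            {
                return false;  // A chunk was evicted or erased; data incomplete.
            }
            chunksOut->push_back(chunkIt->second);
        }
        return true;
    }
    return false;  // Neither slot contains pipeline cache data.
}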
//
// Copyright 2014 The ANGLE Project Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
//
#ifndef COMMON_MEMORYBUFFER_H_
#define COMMON_MEMORYBUFFER_H_

#include "common/Optional.h"
#include "common/angleutils.h"
#include "common/debug.h"

#include <stdint.h>
#include <cstddef>

namespace angle
{
class MemoryBuffer final : NonCopyable
{
public:
MemoryBuffer() = default;
~MemoryBuffer();
MemoryBuffer(MemoryBuffer &&other);
MemoryBuffer &operator=(MemoryBuffer &&other);
// On success, size will be equal to capacity.
[[nodiscard]] bool resize(size_t size);
// Sets size bound by capacity.
void setSize(size_t size)
{
ASSERT(size <= mCapacity);
mSize = size;
}
void setSizeToCapacity() { mSize = mCapacity; }
void clear() { (void)resize(0); }
size_t size() const { return mSize; }
size_t capacity() const { return mCapacity; }
bool empty() const { return mSize == 0; }
const uint8_t *data() const { return mData; }
uint8_t *data()
{
ASSERT(mData);
return mData;
}
uint8_t &operator[](size_t pos)
{
ASSERT(pos < mSize);
return mData[pos];
}
const uint8_t &operator[](size_t pos) const
{
ASSERT(pos < mSize);
return mData[pos];
}
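// Sets every byte of the buffer's current contents (size() bytes) to datum.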
void fill(uint8_t datum);
private:
size_t mSize = 0;
size_t mCapacity = 0;
uint8_t *mData = nullptr;
};

class ScratchBuffer final : NonCopyable
{
public:
ScratchBuffer();
// If a scratch buffer is requested with a smaller size this many times in a
// row, release and recreate the scratch buffer. This ensures we don't have a
// degenerate case where we are stuck hogging memory.
ScratchBuffer(uint32_t lifetime);
~ScratchBuffer();
ScratchBuffer(ScratchBuffer &&other);
ScratchBuffer &operator=(ScratchBuffer &&other);
// Returns true with a memory buffer of the requested size, or false on failure.
bool get(size_t requestedSize, MemoryBuffer **memoryBufferOut);
// Same as get, but ensures new values are initialized to a fixed constant.
bool getInitialized(size_t requestedSize, MemoryBuffer **memoryBufferOut, uint8_t initValue);
// Ticks the release counter for the scratch buffer. Also done implicitly in get().
void tick();
void clear();
private:
bool getImpl(size_t requestedSize, MemoryBuffer **memoryBufferOut, Optional<uint8_t> initValue);
uint32_t mLifetime;
uint32_t mResetCounter;
MemoryBuffer mScratchMemory;
};
} // namespace angle
#endif // COMMON_MEMORYBUFFER_H_
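For context, here is a hypothetical usage sketch of the two classes
above. The function, its parameters, and the call pattern are inferred
only from the comments in this header, not taken from an actual ANGLE
call site.

#include "common/MemoryBuffer.h"

bool WriteCompressedData(angle::ScratchBuffer *scratch, size_t worstCaseSize)
{
    angle::MemoryBuffer *buffer = nullptr;

    // get() hands back a buffer of at least the requested size (recreating it
    // if needed) and implicitly ticks the release counter.
    if (!scratch->get(worstCaseSize, &buffer))
    {
        return false;  // Allocation failure.
    }

    // ... produce data into buffer->data() ...
    size_t bytesWritten = worstCaseSize / 2;  // Placeholder for the real count.

    // Record how much of the buffer was actually used; setSize() asserts that
    // the value does not exceed the capacity.
    buffer->setSize(bytesWritten);
    return buffer->size() == bytesWritten;
}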