Creating a memory allocator
Before you can create buffers in memory, you have to request (allocate) some memory first. It turns out that allocating memory efficiently and dynamically is challenging. Luckily, in vulkano, we have several kinds of memory allocators that we can pick from depending on our use case. Since we don't have any special needs, we can use the StandardMemoryAllocator with default settings. That kind of allocator is general-purpose and will be your go-to option in most cases.
```rust
use vulkano::memory::allocator::StandardMemoryAllocator;

let memory_allocator = Arc::new(StandardMemoryAllocator::new_default(device.clone()));
```
Since device is actually an Arc<Device>, the call to .clone() only clones the Arc, which isn't expensive. You should get used to passing the device as a parameter, as you will need to do so for most of the Vulkan objects that you create. We encapsulate the memory allocator with an atomic reference counter since Buffer::from_data requires an Arc.
Creating a buffer
When using Vulkan, you will very often need the GPU to read or write data in memory. In fact there isn't much point in using the GPU otherwise, as there is nothing you can do with the results of its work except write them to memory.
In order for the GPU to be able to access some data (either for reading, writing or both), we first need to create a buffer object and put the data in it.
Memory type filter
A Vulkan implementation might (and most often does) have multiple memory types, each best suited to certain tasks. There are many possible arrangements of memory types a Vulkan implementation might have, and picking the right one is important to ensure optimal performance.
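If you are curious which memory types and heaps your implementation exposes, you can inspect them through the physical device. A quick sketch (this assumes the device created in the earlier chapters is still in scope):

```rust
// Print every memory type the implementation exposes, along with the heap it lives on
// and its property flags (device-local, host-visible, and so on).
let memory_properties = device.physical_device().memory_properties();

for (index, memory_type) in memory_properties.memory_types.iter().enumerate() {
    println!(
        "memory type {}: heap {}, flags {:?}",
        index, memory_type.heap_index, memory_type.property_flags,
    );
}
```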
When allocating memory for a buffer in vulkano, you have to provide a memory type filter, which tells the memory allocator which memory types it should prefer, and which ones it should avoid, when picking the right one. For example, if you want to continuously upload data to the GPU, you should use MemoryTypeFilter::PREFER_DEVICE | MemoryTypeFilter::HOST_SEQUENTIAL_WRITE; on the other hand, if you have some data that will largely remain visible only to the GPU, using MemoryTypeFilter::PREFER_DEVICE brings increased performance at the cost of more complicated data access from the CPU. For staging buffers, you should use MemoryTypeFilter::PREFER_HOST | MemoryTypeFilter::HOST_SEQUENTIAL_WRITE.
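As a quick reference, here are those three combinations written out as values (the variable names are ours, purely for illustration):

```rust
use vulkano::memory::allocator::MemoryTypeFilter;

// Continuously uploading data from the CPU to the GPU:
let continuous_upload = MemoryTypeFilter::PREFER_DEVICE | MemoryTypeFilter::HOST_SEQUENTIAL_WRITE;

// Data that largely stays visible only to the GPU:
let gpu_only = MemoryTypeFilter::PREFER_DEVICE;

// A staging buffer that the CPU fills and the GPU then copies from:
let staging = MemoryTypeFilter::PREFER_HOST | MemoryTypeFilter::HOST_SEQUENTIAL_WRITE;
```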
The simplest way to create a buffer is to create it in CPU-accessible memory, by using MemoryTypeFilter::HOST_SEQUENTIAL_WRITE or MemoryTypeFilter::HOST_RANDOM_ACCESS, together with one of the other filters depending on whether host or device-local memory is preferred.
```rust
use vulkano::buffer::{Buffer, BufferCreateInfo, BufferUsage};
use vulkano::memory::allocator::{AllocationCreateInfo, MemoryTypeFilter};

let data: i32 = 12;
let buffer = Buffer::from_data(
    memory_allocator.clone(),
    BufferCreateInfo {
        usage: BufferUsage::UNIFORM_BUFFER,
        ..Default::default()
    },
    AllocationCreateInfo {
        memory_type_filter: MemoryTypeFilter::PREFER_DEVICE
            | MemoryTypeFilter::HOST_SEQUENTIAL_WRITE,
        ..Default::default()
    },
    data,
)
.expect("failed to create buffer");
```
We have to indicate several things when creating the buffer. The first parameter is an Arc of the memory allocator to use.
The second parameter is the create info for the buffer. The only field that you have to override is the usage we are creating the buffer for, which can help the implementation perform some optimizations. Trying to use a buffer in a way that wasn't indicated when creating it will result in an error. For the sake of the example, we just create a buffer that supports being used as a uniform buffer.
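Usage flags can be combined when the buffer will be used in more than one way. For instance, a buffer that a shader reads as a storage buffer and that also receives data from a copy command could be described like this (a sketch; the exact flags depend on your use case):

```rust
use vulkano::buffer::{BufferCreateInfo, BufferUsage};

let create_info = BufferCreateInfo {
    // The buffer may be bound as a storage buffer and be the destination of a transfer.
    usage: BufferUsage::STORAGE_BUFFER | BufferUsage::TRANSFER_DST,
    ..Default::default()
};
```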
The third parameter is the create info for the allocation. The field of interest is the memory type filter.

When creating a CPU-accessible buffer, you will most commonly use MemoryTypeFilter::PREFER_HOST | MemoryTypeFilter::HOST_SEQUENTIAL_WRITE, but in cases where the application is writing data through this buffer continuously, using MemoryTypeFilter::PREFER_HOST | MemoryTypeFilter::HOST_RANDOM_ACCESS is preferred as it may yield some performance gain. Using MemoryTypeFilter::PREFER_DEVICE will get you a buffer that is inaccessible from the CPU when such a memory type exists. Therefore, you can't use this filter together with Buffer::from_data directly, and instead have to create a staging buffer whose content is then copied to the device-local buffer.
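As a rough sketch of the device-local case, vulkano's Buffer::new_slice constructor can allocate such a buffer without initializing it; filling it requires recording a copy command into a command buffer, which is covered later in this guide (the usage flags below are illustrative assumptions):

```rust
// An uninitialized, device-local buffer of 128 bytes that a later copy command will fill.
let device_local_buffer = Buffer::new_slice::<u8>(
    memory_allocator.clone(),
    BufferCreateInfo {
        // TRANSFER_DST so that a staging buffer's contents can be copied into it.
        usage: BufferUsage::STORAGE_BUFFER | BufferUsage::TRANSFER_DST,
        ..Default::default()
    },
    AllocationCreateInfo {
        memory_type_filter: MemoryTypeFilter::PREFER_DEVICE,
        ..Default::default()
    },
    128,
)
.expect("failed to create device-local buffer");
```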
Finally, the fourth parameter is the content of the buffer. Here we create a buffer that contains a single integer with the value 12.
Note: In a real application you shouldn't create buffers with only 4 bytes of data. Although buffers aren't expensive, you should try to group as much related data as you can in the same buffer.
From_data and from_iter
In the example above we create a buffer that contains the value 12, which is of type i32, but you can put any type you want in a buffer; there is no restriction. In order to give our arbitrary types a representation that can be used in a generic way, we use the crate bytemuck and its "plain old data" trait, AnyBitPattern. Thus, any crate which exposes types with bytemuck support can be used in a buffer. You can also derive AnyBitPattern for your own types, or use the vulkano-provided BufferContents derive macro:
```rust
use vulkano::buffer::BufferContents;

#[derive(BufferContents)]
#[repr(C)]
struct MyStruct {
    a: u32,
    b: u32,
}

let data = MyStruct { a: 5, b: 69 };

let buffer = Buffer::from_data(
    memory_allocator.clone(),
    BufferCreateInfo {
        usage: BufferUsage::UNIFORM_BUFFER,
        ..Default::default()
    },
    AllocationCreateInfo {
        memory_type_filter: MemoryTypeFilter::PREFER_DEVICE
            | MemoryTypeFilter::HOST_SEQUENTIAL_WRITE,
        ..Default::default()
    },
    data,
)
.unwrap();
```
While it is sometimes useful to use a buffer that contains a single struct, in practice it is very common to put an array of values inside a buffer. You can, for example, put an array of fifty i32s in a buffer with the Buffer::from_data function.
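A sketch of that fixed-size case, which should work since fixed-size arrays of plain-old-data types like [i32; 50] are covered by bytemuck's AnyBitPattern:

```rust
// Fifty i32s whose count is known at compile time, uploaded with from_data.
let data = [0i32; 50];

let buffer = Buffer::from_data(
    memory_allocator.clone(),
    BufferCreateInfo {
        usage: BufferUsage::UNIFORM_BUFFER,
        ..Default::default()
    },
    AllocationCreateInfo {
        memory_type_filter: MemoryTypeFilter::PREFER_DEVICE
            | MemoryTypeFilter::HOST_SEQUENTIAL_WRITE,
        ..Default::default()
    },
    data,
)
.unwrap();
```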
However, in practice it is also very common to not know the size of the array at compile-time. In order to handle this, Buffer provides a from_iter constructor that takes an iterator to the data as the last parameter, instead of the data itself.
In the example below, we create a buffer that contains the value 5 of type u8, 128 times. The type of the content of the buffer is [u8], which, in Rust, represents an array of u8s whose size is only known at runtime.
```rust
let iter = (0..128).map(|_| 5u8);

let buffer = Buffer::from_iter(
    memory_allocator.clone(),
    BufferCreateInfo {
        usage: BufferUsage::UNIFORM_BUFFER,
        ..Default::default()
    },
    AllocationCreateInfo {
        memory_type_filter: MemoryTypeFilter::PREFER_DEVICE
            | MemoryTypeFilter::HOST_SEQUENTIAL_WRITE,
        ..Default::default()
    },
    iter,
)
.unwrap();
```
Reading and writing the contents of a buffer
Once a CPU-accessible buffer is created, you can access its content with the read() or write() methods. Using read() will grant you shared access to the content of the buffer, and using write() will grant you exclusive access. This is similar to using a RwLock.

For example, if buffer contains a MyStruct (see above):
```rust
let mut content = buffer.write().unwrap();
// `content` implements `DerefMut` whose target is of type `MyStruct` (the content of the buffer)
content.a *= 2;
content.b = 9;
```
Alternatively, suppose that the content of buffer is of type [u8] (like with the example that uses from_iter):
```rust
let mut content = buffer.write().unwrap();
// this time `content` derefs to `[u8]`
content[12] = 83;
content[7] = 3;
```
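Reading works the same way, except that read() only grants shared, immutable access. A small sketch using the same [u8] buffer:

```rust
// read() returns a guard that derefs to `[u8]`, just like the one returned by write().
let content = buffer.read().unwrap();
println!("first byte: {}", content[0]);
```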
Just like with the constructors, keep in mind that being able to read or write the content of a buffer like this is specific to buffers allocated in CPU-accessible memory. Device-local buffers cannot be accessed in this way.
Next: Example operation