
[wb_facq_core] Samples wider than 128 bits should be spaced by at least 'width / 128' cycles, otherwise data will be lost #19

Open · augustofg opened this issue Apr 17, 2023 · 0 comments

@augustofg (Member):
Due to the way wb_facq_core is implemented, samples wider than 128 bits cannot be acquired unless they are spaced by at least 'width / 128' cycles, because there is no FIFO in front of the gc_word_packer component. The packer's d_req_o signal is left unconnected, so no back-pressure is applied to the incoming data, and a sample that arrives before the previous one has been fully serialized is lost:

gen_packer_wider : if g_facq_channels(i).width >= c_max_channel_width generate

  assert (g_facq_channels(i).width mod c_max_channel_width = 0)
    report "[wb_facq_core] g_facq_channels(" & Integer'image(i) &
           ").width must divide c_max_channel_width (" & Integer'image(c_max_channel_width) &
           ")"
    severity failure;

  cmp_gc_word_packer : gc_word_packer
    generic map (
      g_input_width  => to_integer(g_facq_channels(i).width),
      g_output_width => c_max_channel_width
    )
    port map (
      clk_i     => fs_clk_i,
      rst_n_i   => fs_rst_n_i,
      d_i       => acq_val_i(i)(to_integer(g_facq_channels(i).width)-1 downto 0),
      d_valid_i => acq_dvalid_i(i),
      d_req_o   => open,
      q_o       => acq_val_full_unpack_array(i)(c_max_channel_width-1 downto 0),
      q_id_o    => acq_val_id_unpack(i)(f_log2_size(to_integer(g_facq_channels(i).width)/c_max_channel_width + 1)-1 downto 0),
      q_valid_o => acq_dvalid_unpack(i),
      q_req_i   => '1'
    );

end generate;
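
Concretely, with c_max_channel_width = 128, a 512-bit channel needs 512 / 128 = 4 output cycles to drain each sample through the packer, so acq_dvalid_i pulses must arrive at least 4 cycles apart. A simulation-only checker along the following lines would at least make violations visible in a testbench instead of silently dropping data. This is a sketch: the process, its names, and its placement inside the gen_packer_wider generate are illustrative, not existing wb_facq_core code.

-- pragma translate_off
p_check_spacing : process(fs_clk_i)
  -- minimum spacing between samples, in fs_clk_i cycles
  constant c_min_spacing : natural :=
    to_integer(g_facq_channels(i).width) / c_max_channel_width;
  variable v_elapsed : natural := c_min_spacing;
begin
  if rising_edge(fs_clk_i) then
    -- count cycles since the last accepted sample, saturating
    if v_elapsed < c_min_spacing then
      v_elapsed := v_elapsed + 1;
    end if;
    if acq_dvalid_i(i) = '1' then
      assert v_elapsed >= c_min_spacing
        report "[wb_facq_core] channel " & Integer'image(i) &
               ": sample arrived " & Integer'image(v_elapsed) &
               " cycle(s) after the previous one, minimum is " &
               Integer'image(c_min_spacing) & "; data will be lost"
        severity warning;
      v_elapsed := 0;
    end if;
  end if;
end process;
-- pragma translate_on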

To remove this limitation, a FIFO could be used, but this comes with a hardware cost that can be problematic when using multiple channels. Perhaps the best long-term solution would be to refactor the wb_acq_core implementation to offer a flexible interface from the start, using its internal FIFOs to absorb the data bursts it would experience.
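
As a rough sketch of the FIFO variant, the code below inserts a per-channel synchronous FIFO in front of the packer and finally honours d_req_o. It assumes the generic_sync_fifo component from the same general-cores library that provides gc_word_packer; the port names, the show-ahead handshake timing against d_req_o, and the chosen depth are assumptions to be verified, not working code.

-- per-channel signals (declared in the gen_packer_wider declarative region):
signal fifo_q     : std_logic_vector(to_integer(g_facq_channels(i).width)-1 downto 0);
signal fifo_empty : std_logic;
signal fifo_rd    : std_logic;
signal packer_req : std_logic;

-- in the generate body:
cmp_input_fifo : generic_sync_fifo
  generic map (
    g_data_width => to_integer(g_facq_channels(i).width),
    g_size       => 16,     -- burst depth is application-specific (assumed)
    g_show_ahead => true    -- q_o holds the head word while empty_o = '0'
  )
  port map (
    clk_i   => fs_clk_i,
    rst_n_i => fs_rst_n_i,
    d_i     => acq_val_i(i)(to_integer(g_facq_channels(i).width)-1 downto 0),
    we_i    => acq_dvalid_i(i),
    rd_i    => fifo_rd,
    q_o     => fifo_q,
    empty_o => fifo_empty,
    full_o  => open         -- a real fix must also report/handle overflow
  );

-- pop a word only when the packer asks for one and the FIFO has one
fifo_rd <= packer_req and not fifo_empty;

cmp_gc_word_packer : gc_word_packer
  generic map (
    g_input_width  => to_integer(g_facq_channels(i).width),
    g_output_width => c_max_channel_width
  )
  port map (
    clk_i     => fs_clk_i,
    rst_n_i   => fs_rst_n_i,
    d_i       => fifo_q,
    d_valid_i => fifo_rd,    -- with show-ahead, read and data are coincident
    d_req_o   => packer_req, -- back-pressure is now actually used
    q_o       => acq_val_full_unpack_array(i)(c_max_channel_width-1 downto 0),
    q_id_o    => acq_val_id_unpack(i)(f_log2_size(to_integer(g_facq_channels(i).width)/c_max_channel_width + 1)-1 downto 0),
    q_valid_o => acq_dvalid_unpack(i),
    q_req_i   => '1'
  );

The extra width-bit-wide, g_size-deep buffer per wide channel is exactly the hardware-cost trade-off mentioned above, which is why the refactoring of wb_acq_core looks like the better long-term option.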
