[DYNAP-SE2] Output layer's neurons seem to be missing in the generated hardware configuration #10

Open
MarcoBramini opened this issue Jun 7, 2023 · 3 comments
Labels: bug (Something isn't working)

@MarcoBramini

I'm opening this issue because I couldn't find a way to identify the output neurons on DYNAP-SE2 after hardware deployment:

  • The generated hardware configuration object, after network mapping, doesn't contain any reference to them, contrary to the input neurons, which are referenced and tracked in the input_channel_map attribute.
  • The output neurons seem to be missing from the configuration object and therefore not allocated (see the code snippet below)
# Imports added for completeness; exact module paths may vary between Rockpool versions
import numpy as np
from rockpool.nn.combinators import Sequential
from rockpool.nn.modules import LinearTorch, LIFTorch
from rockpool.devices.dynapse import mapper, autoencoder_quantization, config_from_specification

# neuron_parameters: LIF parameter dict defined elsewhere (not shown)
n_input_channels = 12
n_population = 32
n_output_channels = 2

net = Sequential(
    LinearTorch((n_input_channels, n_population)),  # 12 input neurons
    LIFTorch(n_population, **neuron_parameters),  # 32 neurons
    LinearTorch((n_population, n_population)),
    LIFTorch(n_population, has_rec=True, **neuron_parameters),  # 32 neurons
    LinearTorch((n_population, n_output_channels)),
    LIFTorch(n_output_channels, **neuron_parameters),  # 2 output neurons
)  # Total neurons: 12 + 32 + 32 + 2 = 78

net_graph = net.as_graph()
spec = mapper(net_graph)

spec["Iscale"] *= 10

spec.update(autoencoder_quantization(**spec))
config = config_from_specification(**spec)

print(spec['n_neuron'])  # Correctly prints 66 neurons (not counting the 12 input neurons)

# Print all synapses tags for every allocated neuron
tag = []
for core in config["config"].chips[0].cores:
    for neuron in core.neurons:
        for synapse in neuron.synapses:
            tag.append(synapse.tag)
print(np.unique(tag))
# Prints 76 tags (but there should be 78):
# [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
# 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
# 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
# 72 73 74 75]

# Print all destinations tags for every allocated neuron
tag = []
for core in config["config"].chips[0].cores:
    for neuron in core.neurons:
        for destination in neuron.destinations:
            if destination.x_hop != -7 and destination.tag != 0:
                tag.append(destination.tag)
print(np.unique(tag))
# Prints 64 tags (but there should be 66):
# [12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
# 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59
# 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75]
@DylanMuir DylanMuir added the bug Something isn't working label Jun 8, 2023
@ugurcancakal (Contributor)

@MarcoBramini thanks for pointing this out.

First of all, the number of tags does not necessarily equal the number of neurons, because tags are only used where there is a connection. In short, you can see that the last two columns of spec['weights_rec'] are fully zero; in that case, no CAM entries will be allocated for the last two neurons.
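
As a quick check (a sketch, not from the thread; it assumes the quantized weights are stored as a per-chip list, as the indexing in the workaround further down suggests), you can verify that the columns for the two output neurons are empty after quantization:

import numpy as np

# Hedged sketch: inspect the quantized recurrent weights in the spec.
# The last two columns are assumed to correspond to the two output neurons;
# if they are all zero, no CAM entries will be allocated for those neurons.
w_rec_q = np.asarray(spec["weights_rec"][0])
print("output columns all zero:", not np.any(w_rec_q[:, -2:]))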


Let's put a breakpoint in config_from_specification at line 186, sram = allocator.SRAM_content(. There you can access the allocator object.

You can see that

allocator.n_in = 12
allocator.n_neuron = 66

which is the same as in the spec dictionary.

If you call allocator.tag_selector() there, it will return the tags allocated for input connections, recurrent connections, and output connections.

For input connections you'll see that [0..11] are allocated, and for recurrent connections [12..77] are allocated.

But the issue is that the last two tags, 76 and 77, are not used by any connection.

In allocator, at line 201, content_rec = self.matrix_to_synapse(, you can see that self.weights_rec is used as the reference matrix to create the CAM content. Since the last two rows of self.weights_rec consist only of 0s, no CAMs will be allocated!
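
For illustration, here is roughly what could be evaluated at that breakpoint (a sketch based on the attributes named above; the exact return value of tag_selector() and the layout of weights_rec may look different):

import numpy as np

# Evaluated at the breakpoint inside config_from_specification (sketch).
print(allocator.n_in)       # 12, the number of input channels
print(allocator.n_neuron)   # 66, matching spec['n_neuron']

# Tags reserved for input, recurrent and output connections.
print(allocator.tag_selector())

# The CAM content is built from this matrix; all-zero rows for the two
# output neurons mean no CAM entries are created for them.
print(np.count_nonzero(allocator.weights_rec[-2:]))   # 0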

You can also see that the last two columns of spec['weights_rec'] are fully zero.

The reason for that is the autoencoder_quantization(**spec) step: the output weights do not survive quantization there and are unfortunately pruned away.
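
One way to confirm this (a rough sketch outside the thread; it assumes the float recurrent weights from mapper() sit under the same 'weights_rec' key as a single 2D (pre, post) array, while the quantized result is the per-chip list indexed elsewhere in this thread) is to keep a reference to the float weights before quantization and compare:

import numpy as np

# Keep the float recurrent weights produced by mapper() before
# autoencoder_quantization overwrites the key (layout assumed (pre, post)).
w_rec_float = np.asarray(spec["weights_rec"])

spec.update(autoencoder_quantization(**spec))
w_rec_quant = np.asarray(spec["weights_rec"][0])  # per-chip list after quantization

# The mapped network does have weights into the two output neurons...
print("float weights into outputs:    ", np.count_nonzero(w_rec_float[:, -2:]))
# ...but after quantization those columns are empty, so no CAMs are written.
print("quantized weights into outputs:", np.count_nonzero(w_rec_quant[:, -2:]))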

@ugurcancakal (Contributor)

You can force the quantized 'weights_rec' to have some connections, like this:

...
spec.update(autoencoder_quantization(**spec))

spec['weights_rec'][0][-1][0] = 1
spec['weights_rec'][0][-2][0] = 1

config = config_from_specification(**spec)
...

In that case, you'll see that 77 tags will be used.
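
To double-check, you can re-run the tag-collection loops from the original post after applying the workaround (a sketch; the previously missing tags should now show up among the allocated connections):

import numpy as np

# Re-run the checks from the original post after forcing the connections.
syn_tags, dest_tags = [], []
for core in config["config"].chips[0].cores:
    for neuron in core.neurons:
        syn_tags.extend(s.tag for s in neuron.synapses)
        dest_tags.extend(
            d.tag for d in neuron.destinations if d.x_hop != -7 and d.tag != 0
        )

print(np.unique(syn_tags))   # the synapse tags, which previously stopped at 75
print(np.unique(dest_tags))  # the destination tags, which previously stopped at 75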

@DylanMuir (Member)

@ugurcancakal in Marco's original case, how can he identify the hardware neuron IDs of the output neurons? Can he use the mapped graph, for example?
