Hi, I'm debugging my assignment 2 code and I found this issue. Maybe you'd want to take a look at it?
In demo_controller.py, the control method does a min-max scaling:
def control(self, inputs, controller):
    # Normalises the input using min-max scaling
    inputs = (inputs - min(inputs)) / float((max(inputs) - min(inputs)))
The problem is that min-max scaling should be computed over the entire data set (or over fixed, known value ranges), not over each individual input vector; otherwise it distorts the data ranges.
In this case, the scaler completely shadows inputs[2] and inputs[3], because the original value range of these two is [-1, 1], while all the other value ranges are wider than [-400, 400] based on my tests. This means that after scaling, the original values of inputs[2] and inputs[3] barely matter, because they are too small relative to the other values. This is bad because these two inputs represent the player's direction and the enemy's direction, but in this situation the NN can no longer distinguish them.
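To illustrate the shadowing, here is a small sketch with made-up sensor values (the numbers are only for demonstration, not real framework readings). It applies the same per-vector min-max scaling as control() to two readings that differ only in the direction flag at index 2:

```python
import numpy as np

def per_vector_minmax(inputs):
    # The scaling currently done in control(): it uses the min/max
    # of this single input vector, not of the whole data set.
    inputs = np.asarray(inputs, dtype=float)
    return (inputs - inputs.min()) / (inputs.max() - inputs.min())

# Two hypothetical sensor readings that differ only in the
# direction flag inputs[2] (+1 vs -1):
a = per_vector_minmax([300.0, -250.0, 1.0, -1.0, 120.0])
b = per_vector_minmax([300.0, -250.0, -1.0, -1.0, 120.0])

# The original values at index 2 differ by 2.0, but after scaling
# against the wide [-250, 300] range they are nearly identical:
print(abs(a[2] - b[2]))  # a tiny difference, well below 0.01
```

Flipping the direction from +1 to -1 barely moves the scaled value, so the NN effectively never sees the change.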
The following figure is an illustration. I collected some data from the sensors to draw it: the y-axis is the scaled inputs[2], and the x-axis is the same quantity computed without the original inputs[2] and inputs[3]. They show a directly proportional relationship with k = 1, meaning the two are essentially identical, so the scaled inputs[2] is independent of the inputs[2] reported by the sensor.
I'd suggest changing it to something like this, which would make the scaling more reasonable:
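A sketch of one possible fix: scale each input by a fixed, known per-sensor range instead of the per-vector min/max. The ranges below are placeholders based on my tests above ([-400, 400] for most sensors, [-1, 1] for the two direction flags); the real framework should use the sensors' true documented ranges, and the sensor count here is just an example:

```python
import numpy as np

# Placeholder per-sensor ranges: indices 2 and 3 are the direction
# flags in [-1, 1]; the rest are assumed to span roughly [-400, 400].
SENSOR_MIN = np.array([-400.0, -400.0, -1.0, -1.0] + [-400.0] * 16)
SENSOR_MAX = np.array([400.0, 400.0, 1.0, 1.0] + [400.0] * 16)

def scale_inputs(inputs):
    # Min-max scale each sensor by its own fixed range, so the
    # narrow-range direction flags are not shadowed by the wide
    # distance sensors.
    inputs = np.asarray(inputs, dtype=float)
    n = len(inputs)
    return (inputs - SENSOR_MIN[:n]) / (SENSOR_MAX[:n] - SENSOR_MIN[:n])
```

With this, flipping a direction flag from -1 to +1 moves its scaled value over the full [0, 1] interval, regardless of what the distance sensors read.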
Maybe you'd want to fix this after assignment 2 is finished, since lots of groups are already working on their projects based on this NN structure?