This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: given two nets whose neurons all have the same nonlinear activation function σ, if the two nets have equal behaviors as "black boxes" then necessarily they must have the same number of neurons and, except at most for sign reversals at each node, the same weights.
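The sign-reversal ambiguity can be made concrete with a small numerical sketch. The model class, state update rule, and matrix names below are illustrative assumptions, not taken from the paper: for a net of the hypothetical form x' = σ(Ax + Bu), y = Cx with the odd activation σ = tanh, flipping the sign of a state coordinate via T = diag(±1) maps (A, B, C) to (TAT, TB, CT) and leaves the input/output behavior unchanged, since T·tanh(v) = tanh(T·v) for odd σ.

```python
import numpy as np

# Illustrative sketch (assumed model class, not the paper's notation):
# two nets related by a sign reversal at one node produce identical
# input/output trajectories when the activation is odd (tanh).
rng = np.random.default_rng(0)
n, m, p = 3, 1, 1                      # states, inputs, outputs (arbitrary)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

T = np.diag([1.0, -1.0, 1.0])          # sign reversal at the second node
A2, B2, C2 = T @ A @ T, T @ B, C @ T   # the sign-flipped net

def simulate(A, B, C, u, dt=0.01, steps=500):
    """Forward-Euler simulation of x' = tanh(Ax + Bu), y = Cx."""
    x = np.zeros(A.shape[0])
    ys = []
    for k in range(steps):
        x = x + dt * np.tanh(A @ x + B @ u(k * dt))
        ys.append(C @ x)
    return np.array(ys)

u = lambda t: np.array([np.sin(t)])    # an arbitrary test input signal
y1 = simulate(A, B, C, u)
y2 = simulate(A2, B2, C2, u)
gap = float(np.max(np.abs(y1 - y2)))   # both black boxes coincide
```

The two nets have different weight matrices yet `gap` is zero (up to floating point), which is exactly the residual ambiguity the uniqueness result permits: the weights are recoverable only up to such per-node sign reversals.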