Q5. Marks: +2.0 | UGC NET Paper 2: Computer Science and Applications, 26th June 2025, Shift 1
Given that η refers to the learning rate and xi refers to the ith input to the neuron, which of the following most suitably describes the weight update rule of a Kohonen SOM? (where 'j' refers to the jth neuron in the lattice)
1. wij(new) = (1 − η) · wij(old) + η · xi ✓ Correct
2. wij(new) = η · wij(old) + (1 − η) · xi
3. wij(new) = (1 − η) · wij(old) + xi
4. wij(new) = wij(old) + η · xi
Solution
The correct answer is Option 1) wij(new) = (1 − η) · wij(old) + η · xi
Key Points
Kohonen Self-Organizing Map (SOM) is an unsupervised neural network. For each input vector x, the neuron whose weight vector is closest to x is the winner/BMU (Best Matching Unit). The winner (and usually its neighbors) update their weights toward x.
General SOM update rule (vector form):
wj(new) = wj(old) + η · hj,BMU · (x − wj(old))
where η is learning rate and hj,BMU is the neighborhood coefficient (1 for the winner, between 0 and 1 for neighbors, 0 for others).
Rearranging the winner’s update (h = 1):
wj(new) = (1 − η) · wj(old) + η · x
In component form for input dimension i:
wij(new) = (1 − η) · wij(old) + η · xi
This is exactly Option 1.
This formula is a convex combination of the old weight and the input: the new weight moves a fraction η of the way from the old weight toward the input.
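As a quick illustration, here is a minimal Python sketch of this convex-combination update for one weight vector; the function name update_winner and the NumPy representation are assumptions for illustration, not part of the original solution.

```python
import numpy as np

def update_winner(w_old: np.ndarray, x: np.ndarray, eta: float) -> np.ndarray:
    """Move the winner's weight a fraction eta of the way toward the input x."""
    # (1 - eta) * w_old + eta * x is algebraically identical to
    # w_old + eta * (x - w_old), the vector form given above.
    return (1.0 - eta) * w_old + eta * x
```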
Step-by-step intuition
Compute the distances from x to all weight vectors; the neuron with the smallest distance is the BMU.
Update the BMU's weights: pull them toward x by a fraction η. If η is large, the move is big; if small, the move is gentle.
Optionally update neighbors with the same rule but with h < 1, so they move less. The sketch after this list puts all three steps together.
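The sketch below combines the three steps for a 2-D lattice, assuming a Gaussian neighborhood function; the array shapes and the helper name som_step are illustrative choices, not from the original solution.

```python
import numpy as np

def som_step(weights: np.ndarray, x: np.ndarray, eta: float, sigma: float):
    """One SOM update; weights has shape (rows, cols, dim), x has shape (dim,)."""
    # Step 1: distance from x to every neuron's weight vector.
    dists = np.linalg.norm(weights - x, axis=-1)
    # Step 2: BMU = lattice coordinates of the neuron with the smallest distance.
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Step 3: Gaussian neighborhood h (1 at the BMU, shrinking with lattice distance).
    rows, cols = np.indices(dists.shape)
    lattice_d2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-lattice_d2 / (2.0 * sigma ** 2))
    # Apply Option 1 per component: pull each weight toward x by fraction eta * h.
    weights += eta * h[..., None] * (x - weights)
    return weights, bmu
```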
Mini numeric example
Suppose one weight component is wij(old)=0.40, the corresponding input component is xi=1.00, and η=0.20 (winner neuron). Then
wij(new) = (1−0.20)·0.40 + 0.20·1.00 = 0.32 + 0.20 = 0.52 — the weight moved closer to the input.
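The same arithmetic can be checked in two lines of Python, using the values from the example above:

```python
w_old, x_i, eta = 0.40, 1.00, 0.20
w_new = (1 - eta) * w_old + eta * x_i
print(round(w_new, 2))  # 0.52: the weight moved from 0.40 toward the input 1.00
```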
Why the other options are incorrect
Option 2) w(new) = η·w(old) + (1−η)·x
Coefficients are swapped. When η is large (early training), this rule keeps the weight near the old value instead of moving it strongly toward the input, the opposite of SOM learning behavior.
Option 3) w(new) = (1−η)·w(old) + x
Missing the factor η on x, so it is not a convex combination and can overshoot; not the SOM rule.
Option 4) w(new) = w(old) + η·x
Missing the (1 − η)·w(old) shrinkage term: each step adds η·x on top of the old weight, so with repeated presentations the weights grow without bound instead of converging toward x (the numeric check below compares all four rules in a single step).
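A one-step numeric check with a deliberately large learning rate (η = 0.9, w(old) = 0.40, x = 1.00; values are hypothetical, chosen only to expose the differences) shows how each rule behaves:

```python
w, x, eta = 0.40, 1.00, 0.90
print("Option 1:", round((1 - eta) * w + eta * x, 2))  # 0.94 -> strong move toward x
print("Option 2:", round(eta * w + (1 - eta) * x, 2))  # 0.46 -> barely moves: coefficients swapped
print("Option 3:", round((1 - eta) * w + x, 2))        # 1.04 -> overshoots past x
print("Option 4:", round(w + eta * x, 2))              # 1.3  -> keeps growing if repeated
```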
Additional Notes
With neighborhood: wij(new) = (1 − η·h) · wij(old) + η·h · xi, where h decreases with distance from the BMU.
Both η and the neighborhood width typically decay over time to ensure convergence; a sketch of one common decay schedule follows these notes.
Indices: i indexes the input component; j indexes the neuron in the SOM lattice.
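One common decay scheme (not the only possibility; the constants below are assumed purely for illustration) shrinks both η and the neighborhood width σ exponentially with the iteration count t:

```python
import math

eta0, sigma0, tau = 0.5, 3.0, 1000.0  # assumed initial values and time constant

def eta_at(t: int) -> float:
    return eta0 * math.exp(-t / tau)

def sigma_at(t: int) -> float:
    return sigma0 * math.exp(-t / tau)

# Early training: large eta, wide neighborhood -> coarse global ordering.
# Late training: small eta, narrow neighborhood -> fine local tuning.
print(round(eta_at(0), 3), round(eta_at(3000), 3))  # 0.5 0.025
```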