Doubt about the NAG update order in code

In the theory session, the professor presented the NAG (Nesterov accelerated gradient) algorithm as follows:

# compute look-ahead values
w = w - gamma*v_w
b = b - gamma*v_b

# compute gradients at these look-ahead values
dw = grad_w(X, Y, w, b)
db = grad_b(X, Y, w, b)

# then move in the direction of that gradient
w = w - eta*dw
b = b - eta*db

# then update history
v_w = gamma*v_w + eta*dw
v_b = gamma*v_b + eta*db

But in the hands-on session, he changed the order: first compute the look-ahead values, then compute the gradients at those values, then update the history using the gradients, and finally update the parameters using that history.
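For concreteness, here is a minimal sketch of both orderings on a toy 1-D quadratic loss (the loss, the function names, and the hyperparameter values are my own, chosen only for illustration). It suggests the two orderings compute the same iterates, since in the theory version the two moves (look-ahead step plus gradient step) add up to exactly the new history term:

```python
# Toy 1-D loss L(w) = (w - 3)**2, so grad(w) = 2*(w - 3).
def grad(w):
    return 2.0 * (w - 3.0)

def nag_theory(w, v_w, gamma, eta):
    # Theory-session ordering: move to the look-ahead point,
    # take a gradient step from there, then update the history.
    w = w - gamma * v_w           # look-ahead
    dw = grad(w)                  # gradient at the look-ahead point
    w = w - eta * dw              # move in the gradient direction
    v_w = gamma * v_w + eta * dw  # update history
    return w, v_w

def nag_hands_on(w, v_w, gamma, eta):
    # Hands-on ordering: compute the look-ahead in a separate variable,
    # update the history first, then apply it to w in one step.
    w_look = w - gamma * v_w
    dw = grad(w_look)
    v_w = gamma * v_w + eta * dw
    w = w - v_w                   # single step by the updated history
    return w, v_w

w1 = w2 = 0.0
v1 = v2 = 0.0
for _ in range(10):
    w1, v1 = nag_theory(w1, v1, gamma=0.9, eta=0.1)
    w2, v2 = nag_hands_on(w2, v2, gamma=0.9, eta=0.1)
    # the iterates agree up to floating-point rounding
    assert abs(w1 - w2) < 1e-9 and abs(v1 - v2) < 1e-9
```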

Can someone please clarify this?