PyTorch abstraction for parameters

Please refer to the enclosed screenshot.

Why does the quiz answer say that abstracting parameters in order to update them can't be done using `nn.Parameter()`? Is it referring to the `Linear` and `optim` features of PyTorch, where the former abstracts all the parameters in a layer and the latter updates them all? I ask because in the tutorial we see how the `nn.Parameter` abstraction lets us update all the parameters in one shot through the line:
~~~python
param -= learning_rate * param.grad
~~~
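For context, here is a minimal sketch of what I mean, contrasting the manual loop over `nn.Parameter` objects with the `torch.optim` equivalent. The `TinyModel` module and the data shapes are hypothetical, just for illustration:

~~~python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical module: weights registered via nn.Parameter,
# so model.parameters() can find and iterate over them.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(1, 3))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return x @ self.weight.t() + self.bias

model = TinyModel()
x = torch.randn(4, 3)
y = torch.randn(4, 1)
learning_rate = 0.1

# Manual update: iterate over every nn.Parameter and
# apply the gradient step ourselves.
loss = ((model(x) - y) ** 2).mean()
loss.backward()
with torch.no_grad():
    for param in model.parameters():
        param -= learning_rate * param.grad
model.zero_grad()

# The same step via torch.optim, which wraps the loop above.
opt = torch.optim.SGD(model.parameters(), lr=learning_rate)
loss = ((model(x) - y) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
~~~

Both versions update every parameter the module registered, which is why I'm unsure what the quiz answer means by saying `nn.Parameter()` can't abstract the update.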