When I ran the code for the first time, everything was fine. But when I tried a second and third attempt, this error popped up. Kindly give me a solution.
It’s because .numpy() can only be called on a CPU tensor; do not call it on Y_predict while it is still on the GPU. Try doing the following instead:

Y_predict = Y_predict.detach().cpu().numpy()
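As a minimal sketch of that pattern (the tensor here is illustrative; in the thread Y_predict would be the model's GPU output):

```python
import torch

# Illustrative tensor standing in for a model's output; requires_grad=True
# and a possible CUDA placement are the two things .numpy() refuses.
device = "cuda" if torch.cuda.is_available() else "cpu"
Y_predict = torch.randn(4, 2, device=device, requires_grad=True)

# Detach from the autograd graph and move to the CPU before converting;
# calling .numpy() directly on Y_predict would raise an error here.
Y_np = Y_predict.detach().cpu().numpy()
print(type(Y_np))  # <class 'numpy.ndarray'>
```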
Seems unusual; can you try adding torch.cuda.set_device(1) before this snippet?
Thank you sir, it’s solved. X_val was not moved to CUDA before.
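For anyone hitting the same device-mismatch error, a minimal sketch of the fix (the model and the X_val name are illustrative, assuming a plain PyTorch module):

```python
import torch
import torch.nn as nn

# Pick the device once and move both the model and every input to it;
# feeding a CPU tensor to a GPU model (or vice versa) raises a runtime error.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(3, 2).to(device)   # illustrative model
X_val = torch.randn(5, 3)            # starts on the CPU

X_val = X_val.to(device)             # move inputs to the same device as the model
with torch.no_grad():
    Y_predict = model(X_val)

print(Y_predict.shape)               # torch.Size([5, 2])
```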
Please help me with this question: what are the reasons for this error, and how do I solve it?
Hi, can you please attach a screenshot or a snippet/notebook for reference?
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/site-packages/flask_cors/extension.py", line 161, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/usr/local/lib/python3.7/site-packages/flask_cors/decorator.py", line 128, in wrapped_function
    resp = make_response(f(*args, **kwargs))
  File "MLapi/app_flask.py", line 146, in recom
    pred_infer = infer.predict(data)
  File "/usr/src/app/EmotionBert/inference.py", line 50, in predict
    ids.to(self.device, dtype=torch.long)
RuntimeError: CUDA error: an illegal memory access was encountered
Hi, can you please print self.device for reference?
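As a hedged sketch of what that check might look like (the class and attribute names below mirror the traceback but are assumptions, not the thread's actual code): note that Tensor.to() is not in-place, so the line ids.to(self.device, dtype=torch.long) from the traceback does nothing unless its result is assigned back.

```python
import torch

class Inference:
    """Minimal stand-in for the thread's inference class (names are illustrative)."""
    def __init__(self):
        # Resolve and print the device once, so mismatches surface early.
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        print(self.device)

    def predict(self, ids):
        # Tensor.to() returns a NEW tensor; without this assignment,
        # `ids` would silently stay on its original device.
        ids = ids.to(self.device, dtype=torch.long)
        return ids.device

infer = Inference()
dev = infer.predict(torch.tensor([[101, 2023, 102]]))
```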
Just to add, if you’re using GPU as the device, did you use the following snippet?