GoogleNet failing with MXNET #82

Open
saonim opened this issue Feb 5, 2016 · 3 comments

Comments

saonim commented Feb 5, 2016

Hi, I tried to run the GoogLeNet script with MXNet, but it is not working. Can you please provide a pointer?
gnetv1.py is copied from your repo: /soumith/convnet-benchmarks/tree/master/mxnet
work-station$ python gnetv1.py
('Temp Space: ', 'Total 3258 MB allocated')
('Avg forward per batch: ', 0.3881040978431702)
[09:27:57] ./dmlc-core/include/dmlc/logging.h:241: [09:27:57] ./mshadow/mshadow/./tensor_blob.h:617: Check failed: (this->shape_.Size()) == (shape.Size()) TBlob.get_with_shape: new and old shape do not match total elements
[09:27:57] ./dmlc-core/include/dmlc/logging.h:241: [09:27:57] src/engine/./threaded_engine.h:295: [09:27:57] ./mshadow/mshadow/./tensor_blob.h:617: Check failed: (this->shape_.Size()) == (shape.Size()) TBlob.get_with_shape: new and old shape do not match total elements
An fatal error occurred in asynchronous engine operation. If you do not know what caused this error, you can try set environment variable MXNET_ENGINE_TYPEto NaiveEngine and run with debugger (i.e. gdb). This will force all operations to be synchronous and backtrace will give you the series of calls that lead to this error. Remember to set MXNET_ENGINE_TYPE back to empty after debugging.
terminate called after throwing an instance of 'dmlc::Error'
what(): [09:27:57] src/engine/./threaded_engine.h:295: [09:27:57] ./mshadow/mshadow/./tensor_blob.h:617: Check failed: (this->shape_.Size()) == (shape.Size()) TBlob.get_with_shape: new and old shape do not match total elements
An fatal error occurred in asynchronous engine operation. If you do not know what caused this error, you can try set environment variable MXNET_ENGINE_TYPEto NaiveEngine and run with debugger (i.e. gdb). This will force all operations to be synchronous and backtrace will give you the series of calls that lead to this error. Remember to set MXNET_ENGINE_TYPE back to empty after debugging.
Aborted (core dumped)
workstation$
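
For what it's worth, the error text itself points at a way to get a usable backtrace: force the synchronous NaiveEngine before MXNet is imported and run under gdb. A minimal sketch of that toggle (only the environment setup is shown; the rest of gnetv1.py would follow unchanged):

```python
import os

# Force MXNet's synchronous engine, as the error message above suggests,
# so the failing operator appears directly in the gdb backtrace.
# This must be set before mxnet is imported.
os.environ["MXNET_ENGINE_TYPE"] = "NaiveEngine"

import mxnet as mx  # imported after the variable is set, on purpose

# ... the rest of gnetv1.py would run unchanged here ...
```

Remember to unset MXNET_ENGINE_TYPE again after debugging, as the message notes.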

lukemetz commented Feb 5, 2016

Are you on the most recent version of MXNet and mshadow? I just ran mine off of master and it worked fine.
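
A quick way to confirm which build is installed (a sketch; assumes a standard pip/egg install that exposes the usual version attribute):

```python
import mxnet

# Print the installed MXNet version string.
print(mxnet.__version__)
```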

@lingyanz

I have the same problem:

  1. CPU only.
  2. Simply executed https://github.com/soumith/convnet-benchmarks/blob/master/mxnet/gnetv1.py.
  3. Got the above error on 16/08/12 (padding should probably be added; see the sketch after this list). I also found this error on Tag V0.7, Tag16016.
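
A hedged sketch of the padding change hinted at in item 3: MXNet's Pooling operator accepts a pad argument, so a pool whose kernel is larger than its input can be padded until kernel[i] <= input[i] + 2 * pad[i] holds. The layer name, kernel, and pad values below are illustrative assumptions, not lines taken from gnetv1.py:

```python
import mxnet as mx

# Illustrative only: pad a pooling layer so the shape check
# kernel[i] <= dshape[i] + 2 * pad[i] passes. Names and sizes are assumed.
data = mx.symbol.Variable("data")
pool = mx.symbol.Pooling(data=data, kernel=(7, 7), stride=(1, 1),
                         pad=(1, 1), pool_type="avg", name="pool")
```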

@renganxu

I have a different error:
[15:03:52] /root/DL/mxnet/dmlc-core/include/dmlc/logging.h:235: [15:03:52] src/operator/./pooling-inl.h:200: Check failed: param_.kernel[0] <= dshape[2] + 2 * param_.pad[0] && param_.kernel[1] <= dshape[3] + 2 * param_.pad[1] kernel size exceed input
Traceback (most recent call last):
  File "gnetv1.py", line 88, in <module>
    g_exec = loss3_classifier.simple_bind(ctx=dev, grad_req="write", data=dshape)
  File "/usr/lib/python2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/symbol.py", line 671, in simple_bind
    arg_shapes, _, aux_shapes = self.infer_shape(**kwargs)
  File "/usr/lib/python2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/symbol.py", line 453, in infer_shape
    return self._infer_shape_impl(False, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/symbol.py", line 513, in _infer_shape_impl
    ctypes.byref(complete)))
  File "/usr/lib/python2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/base.py", line 77, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: InferShape Error in pooling4: [15:03:52] src/operator/./pooling-inl.h:200: Check failed: param_.kernel[0] <= dshape[2] + 2 * param_.pad[0] && param_.kernel[1] <= dshape[3] + 2 * param_.pad[1] kernel size exceed input

Does anyone know how to solve it?
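
One way to narrow it down (a sketch, not a confirmed fix): infer_shape is exactly what fails in the traceback above, and it can be called directly on any intermediate symbol to see how small the feature map has become before the offending pool. The input shape below is an assumed example, not necessarily the one gnetv1.py uses:

```python
import mxnet as mx

# Sketch: query the output shape of an intermediate symbol from gnetv1.py
# (replace `sym` with, e.g., the input to the failing Pooling layer).
# The (batch, channels, height, width) input shape is an assumption.
sym = mx.symbol.Variable("data")
arg_shapes, out_shapes, aux_shapes = sym.infer_shape(data=(32, 3, 224, 224))
print(out_shapes)
```

If the height/width reaching pooling4 is smaller than its kernel, either the kernel has to shrink or padding has to be added, as the previous comment suggests.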
