CCMpred is limited to MSA length (ncols) = 1787 residues #34
Comments
On CPU, using gdb, the encountered bug is:
Hi @jhschwartz, thanks for the useful analysis! I also wondered whether changing the type of `nvar_padded` to `unsigned int` instead of `long` might avoid the downstream errors in CUDA and libconjugrad. If it works, it would give you another factor of about 1.41 in the maximum number of columns.
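For intuition on where that factor comes from, here is a rough back-of-the-envelope sketch (my own, not CCMpred code), assuming the dominant term of `nvar_padded` grows as `ncols^2 * N_ALPHA * N_ALPHA_PADDED`; because the variable count grows with the square of the MSA width, doubling the representable range buys roughly √2 ≈ 1.41 in columns:

```c
/* Back-of-the-envelope only (not CCMpred code): assumes the dominant term of
 * nvar_padded scales as ncols^2 * N_ALPHA * N_ALPHA_PADDED = ncols^2 * 21 * 32. */
#include <limits.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    const double per_pair = 21.0 * 32.0;                         /* N_ALPHA * N_ALPHA_PADDED */
    const double cols_int  = sqrt((double)INT_MAX  / per_pair);  /* limit with signed int    */
    const double cols_uint = sqrt((double)UINT_MAX / per_pair);  /* limit with unsigned int  */
    printf("int: ~%d cols, unsigned int: ~%d cols, gain: %.3f\n",
           (int)cols_int, (int)cols_uint, cols_uint / cols_int);
    /* prints: int: ~1787 cols, unsigned int: ~2528 cols, gain: 1.414 */
    return 0;
}
```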
Sorry for the delayed/nonexistent responses. I've changed responsibilities and no longer find the time (or access to suitable GPUs) to maintain this or to look at this issue in depth, but here are some ideas of things to try. As Johannes has mentioned, you might try to recompile CCMpred with padding disabled. It might be that there are some systematic problems hiding in the code that prohibit you from working with large MSAs, and that this small change is not really going to solve your problem. In that case, combing over both the CPU and GPU code to look for potential integer overflows sounds like a good strategy.
Hi @soeding @sseemayer! I wanted to try out your suggestions before replying, so I too am sorry for the delay. I can confirm that compiling with padding off increases the MSA width limit and decreases the required memory, although changing the type of `nvar_padded` still causes downstream errors in libconjugrad. Fortunately, 2206 columns is wide enough for my use, so there's no need to worry about it. Thanks again so much for your help!
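For completeness, the 1787 and 2206 limits mentioned in this thread are consistent with a quick estimate, assuming the pair term scales as `ncols^2 * N_ALPHA * N_ALPHA_PADDED` with padding and `ncols^2 * N_ALPHA * N_ALPHA` without it (a sketch, not the actual CCMpred computation):

```c
/* Sketch only: estimates the maximum MSA width that keeps the variable count
 * below INT_MAX, with and without alphabet padding (32 vs. 21 states per column). */
#include <limits.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    int max_cols_padded   = (int)sqrt((double)INT_MAX / (21.0 * 32.0)); /* ~1787 */
    int max_cols_unpadded = (int)sqrt((double)INT_MAX / (21.0 * 21.0)); /* ~2206 */
    printf("padded: %d cols, unpadded: %d cols\n", max_cols_padded, max_cols_unpadded);
    return 0;
}
```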
I came across the following issue when attempting to use CCMpred on an MSA of length 1898 and depth 395.
Noticing that 18,446,744,043,827,191,608 is close to `ULLONG_MAX`, and noting that the expected RAM requirement (according to @kWeissenow's fixed README memory calculation) is ~39 GB, I figured this must be a case of a number being too large for its type.

In `src/ccmpred.c`, lines 387-390, we have the definitions of four size variables, and it seems that `nvar_padded` is the culprit. In my case, I have `N_ALPHA = 21` and `N_ALPHA_PADDED = 32`. For `ncols = 1898`, `nvar_padded` should be `2420853472`. This is above `INT_MAX` (`2147483647`), so `nvar_padded` becomes negative, causing the bug. To keep `nvar_padded` within its integer limit, the MSA length must be less than or equal to 1787.

I've tried changing all occurrences of those four variables to `long` type, but it causes problems with CUDA and I have so far been unable to find the source of the problem. While using `cuda-gdb` I get the following error:

I'm unsure what to make of this. I will continue to work on a fix, but if anyone has suggestions in the meantime, please let me know!
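For reference, a minimal sketch of the overflow described above, assuming `nvar_padded` is computed along these lines (an approximation of the computation around lines 387-390 of `src/ccmpred.c`, not the verbatim code); evaluated in 64-bit arithmetic it reproduces the 2,420,853,472 figure, which exceeds `INT_MAX` and therefore cannot be stored in a 32-bit `int`:

```c
/* Sketch only: approximates the size computation discussed in this issue;
 * not the verbatim lines 387-390 of src/ccmpred.c. */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define N_ALPHA     21
#define N_ALPHA_PAD 32

int main(void) {
    int64_t ncol = 1898;                                    /* MSA width from this issue */
    int64_t nsingle = ncol * (N_ALPHA - 1);                 /* single-emission terms     */
    int64_t nsingle_padded = nsingle + N_ALPHA_PAD - (nsingle % N_ALPHA_PAD);
    int64_t nvar_padded = nsingle_padded + ncol * ncol * N_ALPHA * N_ALPHA_PAD;

    printf("nvar_padded (64-bit): %lld\n", (long long)nvar_padded);  /* 2420853472 */
    printf("INT_MAX:              %d\n", INT_MAX);                   /* 2147483647 */
    printf("stored in 32-bit int: %d\n", (int)nvar_padded);          /* wraps negative on
                                                                        two's-complement targets */
    return 0;
}
```

This is also consistent with the 1787-column limit quoted above: 1787 is the largest width for which this expression stays below `INT_MAX`.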