CCLM failed with error code 1004
Hello,
I have tried to start my ERA-Interim simulation for the extended EURO-CORDEX area again, but somehow I get this interpolation error when CCLM is running, which I cannot place (see below).
So far I have used the same number of levels as before (40), and the ERA-Interim caf files had 60 levels. But as far as I can tell, I use the same script from Manuel as before to create the input files.
Any idea anyone?
Cheers, and have a nice weekend!
Jenny
Rank 15 [Thu Nov 21 12:28:47 2019] [c2-1c1s10n1] application called MPI_Abort(comm=0x84000002, 1004) - process 15
*------------------------------------------------------------*
* PROGRAM TERMINATED BECAUSE OF ERRORS DETECTED
* IN ROUTINE: p_int
*------------------------------------------------------------*
Can you provide the files INPUT and OUTPUT of the job, please?
Hello,
here are the files from int2lm.
Cheers,
Jenny
These files look fine to me. I forgot to ask for the YUSPECIF file of the CCLM simulation. Can you provide that one, too, please?
Here it is!
:)
Have you checked the output files from INT2LM (laf and lbfd) and CCLM (lffd)? Are the numbers OK? If a "p_int" error occurs, it is most likely that there are some inconsistencies in the initial/boundary data set that cause CCLM to produce unrealistic values, and then "p_int" causes a crash.
FYI: The following namelist parameters from a successful simulation of mine are different from yours (and very likely the external data sets are different as well).
In the INT2LM namelist:
In the CCLM namelist:
Dear colleague,
I have the same problem when running the CCLM model. I ran a CCLM simulation successfully a couple of months ago, but when I try to rerun the same simulation now, I get this error:
* PROGRAM TERMINATED BECAUSE OF ERRORS DETECTED
* IN ROUTINE: p_int
...skipping one line
* ERROR CODE is 1004
* plev for interpolation above model top!
*------------------------------------------------------------*
*------------------------------------------------------------*
I noticed that there is also a warning:
!!!!*** WARNING ***!!! CFL-criterion for horizontal advection is violated
!!!! Max ( |cfl_x| + |cfl_y| ) = 72.4515109843900 ( > 0.95 * 1.61000000000000 )
However, my setup for this simulation is the same as for the successful one. Does anyone have any idea? Thank you!
Best regards,
Delei
Hey Delei,
generally I get this error when my model crashes, that is, when there are NaN values in the variables. This seems to cause ERROR 1004 when the output is written on pressure levels.
So whenever this happens, I first check the CFL criterion, and if it is violated in the steps prior to the crash, I assume a normal instability of the simulation and change the time step "dt" in the namelist (normally lowering it).
If this is not the cause of the problem, I fall back to writing output for single time steps by changing the namelist: putting a "!" in front of [ hcomb=…. ] and [ ytunit='d' ] to comment them out,
and adding
[ ngrib=0,1,2,3,4,5,10,20,30 ! selected time steps to write output ]
and [ ytunit='f' ].
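Just as an illustration of what I mean, the output group could then look roughly like this. I am assuming the usual GRIBOUT group name here, and the hcomb values are only placeholders, not taken from any real setup:
 &GRIBOUT
!  hcomb = 0.0, 8760.0, 1.0,               ! commented out: regular output interval in hours (placeholder values)
!  ytunit = 'd',
   ngrib = 0, 1, 2, 3, 4, 5, 10, 20, 30,   ! selected time steps to write output
   ytunit = 'f',
   ! ... rest of the group unchanged
 /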
Normally, from that output I can trace back to where the error occurred: in which variable the NaN first appeared, and maybe even where inside the domain, and so on…
-----
Concerning your problem that you used the same setup, I have no idea. But maybe if you try to trace back the error, you will find the reason why it is not working anymore.
For me it would typically be something like a changed default version of cdo that causes problems in my preprocessing setup and results in a model crash (or a temporary/testing change I forgot to change back and did not remember…).
Just to make sure, maybe compare the YUSPECIF from an old simulation where it worked with the current one, to check that the setup is really the same. (You can use the diff command on Levante, that is "diff old_YUSPECIF new_YUSPECIF".)
-----
good luck debugging
Rolf
Dear Rolf,
Thank you very much for your comprehensive comments. I have tried them one by one.
(1) I have reduced dt from 18 seconds to 9 seconds for a resolution of 0.0275 degrees, but the issue remains. According to other (and my previous) successful setups, 18 s should be fine.
(2) I indeed have two versions of cdo installed; however, I tried both versions and the issue remains the same. I am not sure whether there are any other environment settings I have changed that could possibly cause the problem… I need to check this further.
(3) I ran "diff old_YUSPECIF new_YUSPECIF", and the files are identical.
(4) I tried writing more frequent output and noticed that the NaN values arise at a specific location in the variables PS and PMSL and then spread to other variables and regions. My dt is 9 s, and the NaN occurred between 27 s and 36 s. I also checked YUPRHUMI and YUPRMASS; they contain almost only NaN values except at ntstep=0.
It seems that I cannot attach files in the channel. Thanks in advance for any further comments.
Cheers,
Delei
Dear Delei,
a colleague of mine also had cases where decreasing the time step helped...
If you test, for example, dt = 10, 15, 20, 30, you could compare at which time (not which step, but which time) the first NaN occurs, and whether increasing or decreasing dt makes it better or worse.
(Oh, and does the CFL criterion still get violated with other time steps?)
But overall, since it happens within the first minute, I guess it won't help...
I have had several cases where we had no problem running at 15 and 5 km resolution, but with a finer resolution (nested inside) the simulation sometimes never became stable enough, no matter the time step. And we didn't spend more time on it, but just skipped those.
--------
In your case I guess (4) is the best way to trace back the problem, since you now know a location where it is happening. (How do other variables look/behave there just before they become NaN?)
I had one case where there was a very steep mountain ( >500 m height increase from one grid point to the next ) where wind speed and temperature would take unrealistic values before the crash.
Another case was one where single grid points just had very unrealistically cold temperatures, which could not be fixed but normally did not cause a problem. Sometimes, however, the temperature would drop even lower, and then NaN would spread from these grid points.
Quick idea, if the point is not in the middle but near the border of your domain: just reduce the domain size to exclude the point. I think you only have to change [ startlon_tot, startlat_tot, ie_tot, je_tot ], so it should be no effort?
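Purely as an illustration (the numbers are placeholders, and I am assuming these parameters sit in the usual LMGRID group), the change could look like:
 &LMGRID
   startlon_tot = -20.0,   ! placeholder: shifted lower-left corner (rotated longitude)
   startlat_tot = -18.0,   ! placeholder: shifted lower-left corner (rotated latitude)
   ie_tot = 400,           ! placeholder: reduced number of grid points in x
   je_tot = 380,           ! placeholder: reduced number of grid points in y
   ! ... keep the rest of the group as it is
 /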
If it works afterwards, you can be more certain that the problem is really at that location, and not a more general problem that occurs first at that point but would also happen 1-2 minutes later at other points.
Cheers
Rolf
Dear Rolf,
Thank you very much. I tried many different setups and dt values, but all of them failed. Specifically, for dt=18 s I ran the same simulation four times and stored the outputs, and found that the NaN values differ between the runs: they first appear at different locations, at different times and in different variables in the four simulations. This is a really strange and annoying issue. I am not sure whether it is related to the environment setup or to the supercomputer; I will consult IT about this.
As you said, "we didn't spend more time on it, but just skipped those." What would you suggest? Should I change the model version or the model domain? Thank you.
Cheers,
Delei
Dear Delei,
I never encountered this kind of variability (4 different outputs) with the same setup on the same computer. I often compare different model versions (when we change something), and from this I know that even after 30 h of simulation the difference between the output fields is zero (at least in ncview), at least when I ran it on the same computer.
[ If you want to track down the problem ]
Are the non-NaN values of the 4 simulations the same?
-> If no, I guess it is a weird chaos effect that causes the crash to happen differently, and I would try to get a stable setup that does not produce this chaos effect anymore... (using the same node of the supercomputer, for example, if the nodes have different setups).
-> If yes, I have no idea, and would start suspecting faulty hardware causing random NaN.
-> Using only 1 CPU/core (procx=1, procy=1) may also be a setup you could test. As far as I know, this should not change the simulation output, but I had a case (with INT2LM) where a bug occurred only if parallel computing was used.
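If I remember correctly, the decomposition is set via nprocx/nprocy in the RUNCTL group (please double-check the exact parameter names for your model version), so a single-core test would be something like:
 &RUNCTL
   nprocx = 1,   ! number of processors in x-direction
   nprocy = 1,   ! number of processors in y-direction
   ! ... rest of the group unchanged
 /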
[ If you want a workaround ]
Do you have any working setup right now from which you could deviate slowly towards the setup you need... step by step, always checking when the error occurs?
Cheers
Rolf
Dear Rolf,
Thank you very much for your helpful comments. I finally figured out the issue: I reduced the number of CPUs for the CCLM simulation from 28*32 to 16*24, and then the simulation ran successfully. I am not entirely sure about the underlying reason; it may be related to some unstable information exchange between nodes. Anyway, it has been solved. I greatly appreciate your help!
Best regards,
Delei
Dear Delei,
I am experiencing the same problem right now, and I am using 5*8 (40 cores) for my gcm2cclm runs. Could it be because of the 5:8 proportion? What would you recommend?
Regards,
Emre
Dear Emre,
For my previous issue, it was due to unstable nodes of the supercomputer. After the IT staff excluded the unstable nodes, the simulation ran successfully. It is possible that your case is the same problem, but I am not sure. You could reduce the number of cores and see what happens.
Regards,
Delei
Dear Delei,
Thank you very much for your quick reply. As I was first trying to achieve a 0.11 degree resolution (to further downscale to 0.025), I changed the DT formula in the cclm.job.sh script (in gcm2cclm, line 147) to have 0.11 in the denominator. I found out that was my fault: when I changed it back to 0.44, the CFL failure disappeared.
I have another question, though, and I am hesitant to ask it in this thread. My supercomputer won't allow me to run the model on multiple nodes: when I allocate more CPUs (let's say 80 cores on 2 nodes), the cclm jobs finish very quickly (and incompletely) when run on multiple nodes. Do you have any clue why I face this issue?
Regards,
Emre
Dear Emre,
Nice to know that you solved the problem. I have no idea why it cannot run on multiple nodes on your supercomputer. Maybe you can consult your IT people about this?
Regards,
Delei
Dear Delei,
Thank you for your kind message and suggestion. I will carry this issue with them.
Regards,
Emre
Dear Delei,
Hello again. I've been trying to run the model on multiple nodes using various methods; however, I keep failing. Would it be possible for you to share a working int2lm.job.sh, cclm.job.sh and also the job_settings file as an example? I would like to see how you configured those settings.
Yours,
Emre
Dear Emre,
I have no idea how to upload files in the channel. Please write an email to me at deleili@qdio.ac.cn and I will send them to you by email.
Cheers,
Delei