Problem with high-resolution CLM and int2clm – in #14: CCLM-CLM



  @mikhailvarentsov in #c0f3526



Dear colleagues,

I’m trying to launch the CLM model at 1-km resolution for the region around Moscow and St. Petersburg in Russia in nesting mode, and have run into a problem:

1) I’ve successfully launched int2clm and CLM and run the experiment for a big domain (30×30 degrees) with a resolution of 0.1 degree.
2) Then I tried to launch int2clm in nesting mode for a small domain with 0.01 degree resolution, and int2lm failed with the error “ERROR in interp_l: unique lolp_lm”.
3) Then I found in the int2lm code a reference to this problem and an option for a possible solution (lfill_up = .TRUE. in external_data.src), changed it, and recompiled int2lm. After this, int2lm successfully calculated the initial and boundary conditions for my grid in nesting mode.
4) But when I launched CLM for this small grid, it failed at the beginning of the calculation, because at the first time step some model cells (which are part of small lakes) have NaN values.

Does anybody know a possible solution to my problem?

I’m attaching the external parameter files, sh scripts, and the output file with NaN values to this post.

Thanks in advance,
Mikhail
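For anyone trying to pin down where such NaN cells sit in the output, a quick NumPy check of a 2D field does the job. This is a minimal sketch with a made-up array; with real model output one would first read the variable (e.g. the surface field at the first time step) from the NetCDF file, for instance with the netCDF4 package:

```python
import numpy as np

# Example 2D surface field; in practice this would be read from the
# model's NetCDF output file at the first time step.
field = np.array([[280.1, 281.3, np.nan],
                  [279.8, np.nan, 282.0],
                  [281.5, 280.9, 281.1]])

# (j, i) indices of all NaN cells, scanned in row-major order
nan_idx = np.argwhere(np.isnan(field))
print(nan_idx.tolist())             # [[0, 2], [1, 1]]
print(int(np.isnan(field).sum()))   # 2 NaN cells in total
```

Comparing these indices against the lake mask in the external parameter file would confirm whether the NaNs really coincide with the small-lake cells.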


  @burkhardtrockel in #ddf4c45


Maybe you introduced some inconsistencies by changing lfill_up. Can you upload the lffd2010070100c.nc file, too?


  @mikhailvarentsov in #22665a5


I’ve tried to look for this file again and noticed that it doesn’t exist. I’ve launched another attempt at the same experiment; maybe it will appear :)

And is there perhaps a possible solution for the int2lm problem that does not require changing lfill_up?


  @burkhardtrockel in #3f2c7a7


In clm_nest.sh you set lwrite_const=.TRUE. in the GRIBOUT namelist. Therefore the file should be in the out01 directory in your case.
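For reference, a GRIBOUT block with the constant-fields output switched on might look roughly like this. This is only a sketch: the directory, variable list, and output times below are placeholders, not values taken from the actual clm_nest.sh:

```fortran
 &GRIBOUT
   lwrite_const = .TRUE.,   ! write the constant fields ("...c" file) once
   ydir = 'out01',          ! output directory (placeholder)
   yvarml = 'T_2M', 'PS',   ! example output variables (placeholders)
   hcomb = 0.0, 24.0, 1.0,  ! output times: start, end, step in hours
 /
```

With lwrite_const=.TRUE. the model writes the constant fields (orography, land–sea mask, etc.) in a separate file ending in “c” at the start of the run, which is the lffd…c.nc file asked about above.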


  @mikhailvarentsov in #a15688e


The same problem occurs in an experiment with 0.02 degree resolution. The base domain is the same. Now I’m also attaching the ‘….c.nc’ files.


  @burkhardtrockel in #966fb5b


My guess is that there is either a problem with the external data which cannot be seen at first glance, or the namelist settings are not appropriate for the high-resolution simulation. I hope that someone with experience in very high resolution can help in the latter case.


  @lukasschefczyk in #57a8da5


Maybe try l_cressman = .TRUE. in the int2lm script.
Are you using GPUs for the calculation? (sbatch -N $NP1 -p gpu ompi $DirInt2_Exe/int2_clm_cheat) I don’t have any experience using GPUs or ompi.
Try switching to normal CPUs (fewer are better, but slower of course).
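As a sketch of where that switch would go: in the namelist input that the int2lm script writes, the line would be added to the control group, roughly as below. This assumes l_cressman belongs to the CONTRL group of your INT2LM version; please check the documentation of the version you compiled:

```fortran
 &CONTRL
   l_cressman = .TRUE.,  ! use the Cressman scheme in the match interpolation
   ! ... other CONTRL settings unchanged ...
 /
```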


  @mikhailvarentsov in #717a9b8


I think that the GPU is not the source of the problem: it’s just the name of a queue on our cluster, and it can also be used for usual MPI applications. The problem also appears when I use the standard queue.

I’ve tried l_cressman = .TRUE., but nothing changed. I again got the lolp_lm error when I launched int2lm with lfill_up = .FALSE., and the error with NaN values in CLM when int2lm was run with lfill_up = .TRUE.


  @mikhailvarentsov in #bae9300


Maybe it is possible to switch off the ‘M’atch interpolation and change it to linear?