DLOAD subroutine not working in parallel mode


arun7328
Dear All,

I have written a DLOAD subroutine that reads the pressure on a surface from an external file and runs a direct implicit transient analysis. The pressure on the surface elements is read from a separate file for each time step. I have checked this on my computer in serial mode for 100 elements and 5 time steps and it works fine; the same holds in parallel mode. However, when I increase the number of elements in the file to 1000 and run in parallel, the simulation does not work. It compiles fine and starts running, but it reads only a few elements and then halts. There are no error messages and the status shows running, but nothing is actually happening. The situation is the same when I run it on a cluster with 16 CPUs. I cannot work out what the problem is; it seems to have something to do with reading these files in parallel, checking the element number, and mapping the pressure for each time step.

I would really appreciate it if someone could help or guide me with this.

My DLOAD subroutine is below.

      SUBROUTINE DLOAD(P,KSTEP,KINC,TIME,NOEL,NPT,LAYER,KSPT,COORDS,
     1                 JLTYP,SNAME)
C
      INCLUDE 'ABA_PARAM.INC'
C
      DIMENSION TIME(2),COORDS(3)
      CHARACTER*80 SNAME
      CHARACTER PRES*80
      INTEGER :: SUCCESS
      INTEGER :: K, L, M, ELNO, I, NN
      REAL :: PLOAD
      CHARACTER (LEN=100) :: PNAME
      CHARACTER (LEN=30)  :: LOADFILE
C
      DO I=1,5                      ! number of increments
         IF (KINC.EQ.I) THEN
C           Build the load-file name for this increment, e.g. output1.inp
            WRITE (LOADFILE,'("output",I0,".inp")') I
            PNAME = "C:\Users\lakshmp1\L1000\"//LOADFILE
            PRINT *, PNAME
            NN = 100+I
            OPEN(UNIT=NN, FILE=PNAME, ACCESS='SEQUENTIAL',
     1           ACTION='READ')
            READ(NN,*)              ! skip the first line (*DLOAD header)
            DO
               READ(UNIT=NN, FMT='(A)', IOSTAT=SUCCESS, END=9) PRES
               IF (SUCCESS.NE.0) EXIT
C              The element number starts in column 6 and ends at a comma
               K = 6
               DO WHILE (PRES(K:K) .NE. ',')
                  K = K+1
               END DO
               READ(PRES(6:K-1), *) ELNO
               PRINT *, ELNO
C              Skip the load-type field up to the next comma
               L = K+1
               DO WHILE (PRES(L:L) .NE. ',')
                  L = L+1
               END DO
C              The pressure value runs from the comma to the next blank
               M = L+1
               DO WHILE (PRES(M:M) .NE. ' ')
                  M = M+1
               END DO
               READ(PRES(L+1:M-1), *) PLOAD
               PRINT *, PLOAD
               IF (NOEL.EQ.ELNO) THEN
                  P = PLOAD
                  PRINT *, 'PLOAD written'
               END IF
            END DO
    9       CLOSE(NN)
         END IF
      END DO
      RETURN
      END



 

Thank you.

Arun
Re: DLOAD subroutine not working in parallel mode

Martin Bäker
I do not fully understand how the parallel execution works, but perhaps
you can read the data first into a common block and then access it from
there. However, I am not sure how common blocks behave in parallel
Fortran.

There are auxiliary routines that can find out which processor you
are on - so if you do the reading on one processor only, the
file-access problem should be circumvented.

You could also try loop-level parallelization, although this might
be less efficient.
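
For illustration, here is a minimal, untested sketch of that idea - restricting the file reading to a single MPI process using the GETNUMCPUS and GETRANK utility routines from the Abaqus documentation. How the data is then shared with the other ranks is left open; this is an assumption about one possible arrangement, not a working implementation:

      SUBROUTINE UEXTERNALDB(LOP,LRESTART,TIME,DTIME,KSTEP,KINC)
      INCLUDE 'ABA_PARAM.INC'
      DIMENSION TIME(2)
      INTEGER NPROCS, MYRANK
C
      IF (LOP.EQ.1) THEN                  ! start of each increment
         CALL GETNUMCPUS(NPROCS)          ! number of MPI processes
         CALL GETRANK(MYRANK)             ! rank of this process
         IF (MYRANK.EQ.0) THEN
C           Only rank 0 opens and reads the pressure file here;
C           the data must then be made available to the other
C           ranks (e.g. via an MPI broadcast or a shared array).
         END IF
      END IF
      RETURN
      END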

--
                   Priv.-Doz. Dr. Martin Bäker
                   Institut für Werkstoffe
                   Technische Universität Braunschweig
                   Langer Kamp 8
                   38106 Braunschweig
                   Germany
                   Tel.: 00-49-531-391-3065
                   Fax   00-49-531-391-3058
                   e-mail <[hidden email]>
                   http://www.tu-braunschweig.de/ifw/institut/mitarbeiter/baeker
                http://www.scienceblogs.de/hier-wohnen-drachen
                   Twitter: @Drachenblog
Re: DLOAD subroutine not working in parallel mode

arun7328
Hi Martin,

Thanks a lot for your answer. I have no experience with common blocks or the auxiliary routines you mentioned. I am very new to Fortran and wrote this whole program in a couple of days of hard work. I will read up on the pointers you gave and try to fix it. I cannot simply run in serial, because my actual job has about 13,000 surface elements.

Maybe there is a more efficient way to read pressures from a file and map them in Abaqus.

Regards,
Arun
RE: Re: [Abaqus] DLOAD subroutine not working in parallel mode

David Lindeman
In reply to this post by Martin Bäker
From the manual (Execution->Parallel Execution->About parallel execution->Use with user subroutines):

User subroutines can be used when running jobs in parallel. In a distributed run, the entire model is decomposed into separate domains (partitions). Each domain is serviced by a separate MPI process. Abaqus provides well-defined synchronization points at which it is possible to exchange information across all MPI ranks, using the MPI communications facilities. All native MPI calls are supported, in both Fortran and C++. In addition, for cases of hybrid execution, user subroutines and any subroutines called by them must be thread safe. This precludes the use of common blocks, data statements, and save statements. To work around these limitations and for guidelines and techniques, see "Ensuring thread safety."

Bottom line:  COMMON blocks can’t be used when running in parallel...
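
As an illustrative sketch only (the exact utility interface should be verified against the "Ensuring thread safety" section of the documentation), the thread utility functions can be used to restrict non-thread-safe work to a single thread in hybrid mode:

      INTEGER MYTHREAD, NTHREADS
      INTEGER GET_THREAD_ID, GETNUMTHREADS
C
      NTHREADS = GETNUMTHREADS()       ! threads in this process
      MYTHREAD = GET_THREAD_ID()       ! 0-based ID of this thread
      IF (MYTHREAD.EQ.0) THEN
C        Only one thread performs the (non-thread-safe) file I/O.
      END IF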

Regards,

Dave Lindeman
Staff Scientist
Corporate Research Systems Laboratory
3M Center 235-3G-08
St. Paul, MN 55144
651-733-6383


RE: Re: [Abaqus] DLOAD subroutine not working in parallel mode

arun7328
Hi David and Martin,
 

Your suggestions were very useful, and I think I have got it working now.

My subroutine was not suited for parallel computing. It runs perfectly fine for a smaller number of elements in serial processing. The issue was that when I ran it on multiple processors, all of them were reading the pressure input file at the same time, which caused a conflict and the halt: no error, nothing, the solver just goes idle. I had to read and dig a lot into the Abaqus facilities for thread parallelization. I now use two subroutines, UEXTERNALDB and DLOAD. UEXTERNALDB is called at the start of each increment to read the input file with the pressures. Only a single thread does this job, and the other processors access the data through shared memory. I store the data in dynamic arrays using pointers, and in DLOAD I just assign the values for the element numbers. I have now run the case with 5 time steps, 13,000 elements, and 6 processors, and it takes just 10 seconds to solve. I hope it will run for the same number of elements for 200-250 time steps without any trouble.
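
A minimal sketch of this approach is below. It is illustrative only: the file format is simplified to "element, pressure" pairs, the array size 13000 is hard-coded, and the exact interface of the SMA allocatable-array utilities should be checked in the Abaqus documentation ("Allocatable arrays"):

#include <SMAAspUserSubroutines.hdr>
C
      SUBROUTINE UEXTERNALDB(LOP,LRESTART,TIME,DTIME,KSTEP,KINC)
      INCLUDE 'ABA_PARAM.INC'
      DIMENSION TIME(2)
      REAL*8 PRESS(13000)
      POINTER (PTRP, PRESS)
      INTEGER IEL, IOS
      REAL*8 PVAL
      CHARACTER(LEN=100) PNAME
C
      IF (LOP.EQ.1) THEN                     ! start of each increment
C        Create (or overwrite) a shared array, keyed by ID=1 and
C        indexed by element number, visible to the other threads.
         PTRP = SMAFloatArrayCreate(1, 13000, 0.0D0)
         WRITE(PNAME,'("C:\Users\lakshmp1\L1000\output",I0,".inp")')
     1         KINC
         OPEN(UNIT=101, FILE=PNAME, ACTION='READ')
         READ(101,*)                         ! skip the *DLOAD header
         DO
            READ(101,*,IOSTAT=IOS) IEL, PVAL
            IF (IOS.NE.0) EXIT
            PRESS(IEL) = PVAL
         END DO
         CLOSE(101)
      END IF
      RETURN
      END
C
      SUBROUTINE DLOAD(P,KSTEP,KINC,TIME,NOEL,NPT,LAYER,KSPT,
     1                 COORDS,JLTYP,SNAME)
      INCLUDE 'ABA_PARAM.INC'
      DIMENSION TIME(2),COORDS(3)
      CHARACTER*80 SNAME
      REAL*8 PRESS(13000)
      POINTER (PTRP, PRESS)
C
      PTRP = SMAFloatArrayAccess(1)          ! attach to shared array
      P = PRESS(NOEL)                        ! simple lookup per element
      RETURN
      END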
 Thanks a lot.
 Arun
 

RE: Re: [Abaqus] DLOAD subroutine not working in parallel mode

arun7328
In reply to this post by David Lindeman
Hi,

I have a small issue, and since it is connected with the subroutine I would like some help. In my model I have now added acoustic elements as a separate part and created an instance from the meshed part. When I run the same DLOAD subroutine, it does not run properly. I found out that when I add the acoustic part as an instance and run the analysis, even if I do not connect it by a tie or anything, Abaqus changes the element numbers and assigns unique internal element numbers. I read about this in the output section of the Users Guide (4.1.1) too.

My pressure file uses the element numbers from the .inp file for the surface, but in the new model they are renumbered from 190000 or so. I tried renumbering the acoustic elements and nodes, but nothing changed. How can I stop this from happening, so that when DLOAD is called the pressure can be mapped onto the correct NOEL as defined in the .inp file?

Arun
RE: RE: Re: [Abaqus] DLOAD subroutine not working in parallel mode

David Lindeman
All nodes and elements are renumbered during the solution phase, since unique internal reference numbers are needed. The mapping between these "global" (internal) numbers and the "local" ones (i.e., part instance name plus node or element number) can be obtained using the GETPARTINFO/VGETPARTINFO and GETINTERNAL/VGETINTERNAL utility routines.
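
For instance, a sketch (untested; check the documented interface of these utility routines) of recovering the local numbering inside DLOAD might look like:

      CHARACTER*80 CPNAME
      INTEGER LOCNUM, JRCD
C
C     JTYP=1 requests element information (JTYP=0 would be a node).
      CALL GETPARTINFO(NOEL, 1, CPNAME, LOCNUM, JRCD)
C     On return, CPNAME holds the part instance name and LOCNUM the
C     element number as defined in the .inp file; JRCD is nonzero on
C     error. Compare LOCNUM, not NOEL, against the pressure file.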

Regards,

Dave Lindeman
