I have a situation where I need to run five different child jobs in Talend in parallel. The problem is that my select query returns five different IDs, and for each ID I need to run a separate instance of the job. The issue with the tParallelize component is that it does not let me pass a context variable (the ID in this case) to each subjob.
select id from table limit 5; -> five instances of the same job, each with a different id as a parameter
Any help would be highly appreciated.
I'm not sure I've properly understood what you're doing here, but if you were to break out each of those IDs and store them as five separate context variables, then each job could access its own context variable, with the right ID stored in each, and use that.
So I would start with your database input component (just select the IDs you want) and feed that into a tFlowToIterate. Connect this via an iterate flow to a tFixedFlowInput component and create two fields in its schema, "key" and "value". Use the inline table to set "key" to ((Integer)globalMap.get("tFlowToIterate_1_CURRENT_ITERATION")) and "value" to ((String)globalMap.get("row1.SupplierPartNumber")).
I'd then feed this into a tMap component, where I'd map "ContextNumber" + row2.key into the key column (just to make the context name a bit more readable than a bare iteration number), and send the output straight into a tContextLoad.
From there you can use an OnSubjobOk trigger to your tParallelize component and link all your jobs together. In each child job, configure the job to read the appropriate context variable.
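To make the mapping concrete, here is a minimal plain-Java sketch of what the tFlowToIterate / tFixedFlowInput / tMap / tContextLoad chain above effectively computes. An ordinary Map stands in for Talend's globalMap and loaded context; the sample IDs, the SupplierPartNumber column, and the "ContextNumber" prefix are illustrative, following the example above, not Talend API code.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ContextLoadSketch {
    public static void main(String[] args) {
        // Stand-in for the rows returned by: select id from table limit 5;
        List<String> ids = List.of("A100", "B200", "C300", "D400", "E500");

        // Stand-in for the job context that tContextLoad would populate.
        Map<String, String> context = new LinkedHashMap<>();

        int currentIteration = 0;                 // tFlowToIterate_1_CURRENT_ITERATION
        for (String supplierPartNumber : ids) {   // one iteration per ID row
            currentIteration++;
            // tFixedFlowInput emits a (key, value) row per iteration;
            // the tMap then prefixes the key so each context variable gets
            // a readable name: ContextNumber1, ContextNumber2, ...
            String key = "ContextNumber" + currentIteration;
            context.put(key, supplierPartNumber); // what tContextLoad does
        }

        // Each parallel child job then reads only its own variable, e.g.:
        System.out.println(context.get("ContextNumber1")); // prints "A100"
    }
}
```

Each subjob under tParallelize would then reference its own variable (ContextNumber1 for the first job, ContextNumber2 for the second, and so on), so the five instances run with five different IDs.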