// Example of RequireJS configuration
var tests = [];
for (var file in window.__karma__.files) {
  if (window.__karma__.files.hasOwnProperty(file)) {
    if (/.*specs\.js$/.test(file)) {
      tests.push(file);
    }
  }
}
requirejs.config({
  // karma serves files from `base`
  baseUrl: '/base/src/.tmp/merge/js',
  paths: {
    'jasmine-jquery': '/base/src/test/lib/required/jasmine-jquery',
    'polyfill': 'core/polyfill-launcher',
    'jquery': 'vendor/jquery'
  },
  shim: {
    jquery: {
      exports: '$'
    }
  },
  // ask Require.js to load these files (all our tests)
  deps: tests,
  // starts test run, once Require.js is done
  callback: window.__karma__.start
});
describe('Ammend role modal', () => {
  // Functions that have async logic but do not return the promise
  // can only be tested by checking that their spy functions were called
  it('validateAmmendedRoles should have been called', async () => {
    rolesServiceSpy.validateRoles.and.returnValue(Promise.resolve([]));
    await modal.validateAmmendedRoles();
    expect(uibModalInstanceSpy.close).toHaveBeenCalled();
  });
});
Put the BPMN 2.0 process definition in the src/main/resources/processes folder. All processes placed there are automatically deployed (i.e. parsed and made executable) to the Activiti engine.
Step 1: create the process. Let's keep things simple to start, and create a CommandLineRunner that will be executed when the app boots up:
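The runner code itself was not captured in these notes; a minimal sketch could look like the following (the process key `hireProcess`, the variable names, and the values are assumptions, not taken from the actual process definition):

```java
// Hypothetical sketch: a CommandLineRunner bean that starts the process at boot.
// Process key and variable names are assumptions.
import java.util.HashMap;
import java.util.Map;

import org.activiti.engine.RuntimeService;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;

public class MyApp {

    @Bean
    CommandLineRunner init(final RuntimeService runtimeService) {
        return args -> {
            // Variables the process definition references as ${variableName}
            Map<String, Object> variables = new HashMap<>();
            variables.put("applicantName", "John Doe");
            variables.put("email", "john.doe@example.com");
            variables.put("phoneNumber", "123-456-789");
            runtimeService.startProcessInstanceByKey("hireProcess", variables);
        };
    }
}
```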
So what's happening here is that we have created a map of all the variables needed to run the process, and we pass it when starting the process. If you check the process definition, you'll see we reference those variables using ${variableName} in many places (e.g. the task description).
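To illustrate the idea, here is a rough stdlib-only sketch of how ${variableName} placeholders get filled from a variable map. This is purely conceptual: Activiti actually resolves expressions through a full expression language, not a regex.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: a tiny stand-in for how ${variableName} references in the
// process definition get resolved from the variable map passed at start.
public class ExpressionDemo {

    public static String resolve(String template, Map<String, Object> variables) {
        Matcher m = Pattern.compile("\\$\\{(\\w+)\\}").matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            Object value = variables.get(m.group(1));
            m.appendReplacement(sb, Matcher.quoteReplacement(String.valueOf(value)));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> variables = new HashMap<>();
        variables.put("applicantName", "John Doe");
        // prints the task description with the variable filled in
        System.out.println(resolve("Interview ${applicantName}", variables));
    }
}
```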
Configuration
The REST API is secured by basic auth and won't have any users by default, so we will add an admin user to the system as shown below (add this to the MyApp class). ⚠️ Don't do this in a production system, of course!
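The snippet referred to was not captured in these notes; a sketch along these lines should give the idea (the user name, group name, and password are assumptions):

```java
// Hypothetical sketch: seed a group and an admin user so basic auth works.
// Names and password are assumptions. Never do this in production.
import org.activiti.engine.IdentityService;
import org.activiti.engine.identity.Group;
import org.activiti.engine.identity.User;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.context.annotation.Bean;

public class MyApp {

    @Bean
    InitializingBean usersAndGroupsInitializer(final IdentityService identityService) {
        return () -> {
            Group group = identityService.newGroup("user");
            group.setName("users");
            group.setType("security-role");
            identityService.saveGroup(group);

            User admin = identityService.newUser("admin");
            admin.setPassword("admin");
            identityService.saveUser(admin);
        };
    }
}
```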
This adds a Spring Boot actuator endpoint for Activiti. If we restart the application and hit http://localhost:8080/activiti/, we get some basic stats about our processes. If you imagine that in a live system you've got many more process definitions deployed and executing, you can see how this is useful.
The same actuator is also registered as a JMX bean exposing similar information.
To finish our coding, we will create a dedicated REST endpoint for our hire process, which could be consumed by, for example, a JavaScript web application. Most likely, we'll have a form for the applicant to fill in the details we've been passing programmatically above. And while we're at it, let's store the applicant information as a JPA entity. In that case, the data won't be stored in Activiti anymore, but in a separate table that Activiti references when needed.
We'll also need a Repository for this Entity (put it in a separate file or also in MyApp). No need for any methods: the Repository magic from Spring will generate the methods we need for us.
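The entity and repository code was elided here; a minimal sketch might look like this (class name, field names, and repository name are assumptions):

```java
// Hypothetical sketch of the applicant entity and its Spring Data repository.
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.data.jpa.repository.JpaRepository;

@Entity
class Applicant {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private String email;
    private String phoneNumber;

    protected Applicant() {} // needed by JPA

    public Applicant(String name, String email, String phoneNumber) {
        this.name = name;
        this.email = email;
        this.phoneNumber = phoneNumber;
    }
}

// No method declarations needed: Spring Data generates save/findAll/etc. for us
interface ApplicantRepository extends JpaRepository<Applicant, Long> {
}
```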
We can now go through our process. You could create custom endpoints for this too, exposing different task queries with different forms. Let's see which task the process instance is currently at:
So, our process is now at the Telephone interview task. In a realistic application, there would be a task list and a form that could be filled in to complete this task. Let's complete it; we have to set the telephoneInterviewOutcome variable, because the exclusive gateway uses it to route the execution. When we get the tasks again, the process instance will have moved on to the two parallel tasks in the subprocess (the big rectangle):
// First, the 'phone interview' should be active
Task task = taskService.createTaskQuery()
    .processInstanceId(processInstance.getId())
    .taskCandidateGroup("dev-managers")
    .singleResult();
Assert.assertEquals("Telephone interview", task.getName());
// Completing the phone interview with success should trigger two new tasks
Map<String, Object> taskVariables = new HashMap<String, Object>();
taskVariables.put("telephoneInterviewOutcome", true);
taskService.complete(task.getId(), taskVariables);
// Completing both should wrap up the subprocess, send out the 'welcome mail'
// and end the process instance
taskVariables = new HashMap<String, Object>();
taskVariables.put("techOk", true);
taskService.complete(tasks.get(0).getId(), taskVariables);
public class TenMinuteTutorial {

  public static void main(String[] args) {

    // Create Activiti process engine
    ProcessEngine processEngine = ProcessEngineConfiguration
        .createStandaloneProcessEngineConfiguration()
        .buildProcessEngine();

    // Get Activiti services
    RepositoryService repositoryService = processEngine.getRepositoryService();
    RuntimeService runtimeService = processEngine.getRuntimeService();

    // Deploy the process definition
    repositoryService.createDeployment()
        .addClasspathResource("FinancialReportProcess.bpmn20.xml")
        .deploy();

    // Start a process instance
    String procId = runtimeService.startProcessInstanceByKey("financialReport").getId();

    // Get the first task
    TaskService taskService = processEngine.getTaskService();
    List<Task> tasks = taskService.createTaskQuery().taskCandidateGroup("accountancy").list();
    for (Task task : tasks) {
      System.out.println("Following task is available for accountancy group: " + task.getName());
      // claim it
      taskService.claim(task.getId(), "fozzie");
    }

    // Verify Fozzie can now retrieve the task
    tasks = taskService.createTaskQuery().taskAssignee("fozzie").list();
    for (Task task : tasks) {
      System.out.println("Task for fozzie: " + task.getName());
      // Complete the task
      taskService.complete(task.getId());
    }

    System.out.println("Number of tasks for fozzie: "
        + taskService.createTaskQuery().taskAssignee("fozzie").count());

    // Retrieve and claim the second task
    tasks = taskService.createTaskQuery().taskCandidateGroup("management").list();
    for (Task task : tasks) {
      System.out.println("Following task is available for management group: " + task.getName());
      taskService.claim(task.getId(), "kermit");
    }

    // Completing the second task ends the process
    for (Task task : tasks) {
      taskService.complete(task.getId());
    }

    // Verify that the process is actually finished
    HistoryService historyService = processEngine.getHistoryService();
    HistoricProcessInstance historicProcessInstance = historyService
        .createHistoricProcessInstanceQuery()
        .processInstanceId(procId)
        .singleResult();
    System.out.println("Process instance end time: " + historicProcessInstance.getEndTime());
  }
}
graph LR;
A(start);
B[order pizza];
C[bake pizza];
D[eat pizza];
E(finish);
A --> B;
B --> C;
C --> D;
D --> E;
Alfresco Activiti
Open source workflow in Java
engine
database
REST API
Spring framework (JPA, CDI, LDAP, JMX)
Process overview
All this can be called by REST
graph LR;
A[Model xml];
B[Process definition structure];
C[Process instance];
A --> |Deploy in Activiti| B
B --> |Triggers and user actions| C
Designing a model (xml)
Elements
Start events
Boundary events
Intermediate events
End events
Subprocess (normal line box)
Event subprocess (dashed line box)
Call activity (bold line box): may create a new instance -> external call
Tasks (work units)
Gateways
Sequence flow
Events
Blank: API call
Timer (clock)
Message (postcard icon): local
Signal (triangle): global
Cancel (cross)
Error (lightning): models an error case, not an exception; the right approach is to catch the exception and launch this event
Tasks
User task (you can specify the user role)
Service task (my Java code, call SOAP)
Script task (launch a script, js or groovy, internal logic)
There are also BusinessRuleTask, ReceiveTask, ManualTask, MailTask, CamelTask, MuleTask
Gateways
Exclusive (EX-OR, only one line on split, only waits for one)
Parallel (AND, several lines on split, waits for everyone to finish)
Inclusive (OR, may activate several lines on split, waits for all active ones to continue)
Event-based - generates new process instance
Sequence flow
Normal
Default (crossed arrow)
Message flow (discontinuous)
Pools and lanes (e.g. vendor, client) -> to optimize
Best practices
Unique names
Avoid crossing flows
Modularize models
Naming conventions (verb + object)
Use comments
Avoid deadlocks and multimerges
Split flow with gateways
Avoid split and join at the same point
Avoid splitting tasks after events: get the result first
Avoid recursive subprocesses (beware infinite loops)
Consistent usage of Start and End events (only one start point)
❗ Note: an actor may have several roles (e.g. Amazon has a picker and a packager in its warehouse, but the same person can pick and package). Hence you can have a pool with 2 lanes.
Communication with Activiti
Go to activiti backend and extend (bad!)
Use REST (good)
REST
Model
Deployments
Process definition
Process instances
Executions
Tasks
Forms
History
DB Tables
Jobs
Users and groups
⚠️ Add /service as a prefix to every REST call from the Activiti guide, and use camelCase names for ids, so they map directly to Java.
The Activiti database
Tables:
ACT_EVT_XXX: events
ACT_GE_XXX: binaries, don’t touch
ACT_HI_XXX: history (read-only, or at least that's how it's supposed to be. Don't touch!!)
ACT_ID_XXX: users and groups
ACT_RU_XXX: runtime
IDENTITY_LINK: task X is assigned by user Y
EVENT_SUBSCR: event subscriber, listeners: task X will be done in 60 minutes
EXECUTION: be careful with joins.
JOB: planned tasks (QuartzScheduler will execute a task) -> Quartz’s queue
TASK: they are timed
ACT_PROCDEF_INFO: the xml model parsed is stored here
⚠️ Don't modify the schema! Upgrades can be horrible.
Keep associations in a different database.
Be careful to stay consistent.
Activiti should be the master (don't overwrite it with the associated DB's data).
UIs
Activiti Explorer
It's heavy for PROD; you only deploy it on demand. You usually connect directly to the database.
UI elements
Query
DB
Deployments: be careful, as it does "waterfall deletions": delete a deployment and you'll physically destroy everything related to it
Process
Instances (check active instances)
Process definitions (models) -> tenant: similar to environment, but you can only use it with REST
Model workspace : you can edit and deploy from here
⚠️ Tenants can only be modified via the REST API, by adding an input form and changing the value for "script".
⚠️ Use this app only for queries, and very carefully: it's easy to end up in an accidental cleansing fire.
⚠️ Activiti usually doesn't send back an answer after a PUT (maybe a 204); you should do a GET after that.
⚠️ Error code 409 notifies conflicts (e.g. problems with concurrency and exclusion: someone else already did that, so you can remove it from your cache).
Most of the work we do programming is automating tedious or repetitive tasks. The next obvious step is a way to plan when they should start, or how to make them stop. For this we use "schedulers". We could use basic Java TimerTasks, but that would take a huge effort, as it is a very basic tool and we may need something more advanced. One of the best advantages of using a framework is that we get tested, scalable solutions which can easily be adapted to our needs. Hence, meet Quartz, a free open source library available under the Apache 2.0 license. The idea is simple: we want to execute tasks, which we wrap in an extension of the class "Job". Those Jobs are handled by a main class containing a Scheduler instance, which checks when it's the right time to run them and then executes them.
Process
1. Setting up the Maven dependencies
The first step is getting the dependencies into our project, and that's something we can do via Maven by adding them to the pom.xml file. ❕ These were the stable versions when the post was originally written.
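The dependency block itself was not captured in these notes; for Quartz it would be something like the following (the version shown is an assumption, check the current stable release):

```xml
<!-- Quartz scheduler; the version is an assumption -->
<dependency>
    <groupId>org.quartz-scheduler</groupId>
    <artifactId>quartz</artifactId>
    <version>2.2.1</version>
</dependency>
```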
2.1. The scheduler itself
Scheduler instances are created by the library's factories. Jobs can be scheduled at any time, but none will run until we call the start method.
// Grab the Scheduler instance from the Factory
Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

// and start it off
scheduler.start();
2.2. Job
The most basic execution unit. This wraps the repetitive task you need to plan. The execute method is the most important part, as it is the "main" that will run. Let's see a useful example: we are going to prepare a task which checks what is already planned and reloads the execution time.
2.3. JobDetail
The details needed to run a job: what (the Job class; try to think of it as a facade for the real task) and when (the Trigger). If there are multiple parameters to pass, it may be good to use a JobDataMap to handle them:
public class TestJob implements Job {

  public void execute(JobExecutionContext context) throws JobExecutionException {
    JobKey key = context.getJobDetail().getKey();
    JobDataMap dataMap = context.getJobDetail().getJobDataMap();

    String jobSays = dataMap.getString("jobSays");
    float myValue = dataMap.getFloat("myValue");

    System.out.println("Instance " + key + " of TestJob says: " + jobSays + ", and value is: " + myValue);
  }
}
2.4. Basic triggers
And then schedule those jobs with triggers that define at what time(s) the job should run.
// define the job and tie it to our MyJob class
JobDetail job = newJob(MyJob.class)
    .withIdentity("job1", "group1")
    .build();

// Trigger the job to run now, and then repeat every 40 seconds
Trigger trigger = newTrigger()
    .withIdentity("trigger1", "group1")
    .startNow()
    .withSchedule(simpleSchedule()
        .withIntervalInSeconds(40)
        .repeatForever())
    .build();

// Tell quartz to schedule the job using our trigger
scheduler.scheduleJob(job, trigger);
2.5. CronTrigger
The class which handles the syntax to specify when to run a planned job. Its syntax basics are:
Seconds (0–59)
Minutes (0–59)
Hours (0–23)
Day-of-Month (1–31)
Month (JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, DEC)
Day-of-Week (SUN, MON, TUE, WED, THU, FRI, SAT)
Year (optional field)
A few more complex examples:
0 0/5 * * * ? → every 5 minutes
10 0/5 * * * ? → every 5 minutes, at 10 seconds after the minute (10:00:10 am, 10:05:10 am, …)
0 30 10-13 ? * WED,FRI → at 10:30, 11:30, 12:30, and 13:30, on every Wednesday and Friday
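As a quick self-check of the field order above, here is a small stdlib-only helper (purely illustrative, not part of Quartz) that splits a cron expression into its named fields:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative helper: splits a Quartz-style cron expression into its named
// fields, in the order listed above (year is the optional 7th field).
public class CronFields {

    private static final String[] NAMES = {
        "seconds", "minutes", "hours", "day-of-month", "month", "day-of-week", "year"
    };

    public static Map<String, String> parse(String cron) {
        String[] parts = cron.trim().split("\\s+");
        if (parts.length < 6 || parts.length > 7) {
            throw new IllegalArgumentException("A Quartz cron expression has 6 or 7 fields");
        }
        Map<String, String> fields = new LinkedHashMap<>();
        for (int i = 0; i < parts.length; i++) {
            fields.put(NAMES[i], parts[i]);
        }
        return fields;
    }

    public static void main(String[] args) {
        // the "every 5 minutes" example from the list above
        System.out.println(parse("0 0/5 * * * ?"));
    }
}
```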
3. Basic architecture
The best way to understand how this works is to create a quick example:
3.1. Scheduler prototype
Enumerator of the jobs to run: it has the full list of jobs, so every new job must be specified here.
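That enumerator idea can be sketched as a plain enum, with one entry per job pairing a job class name with its cron expression, so the scheduler prototype can iterate the full list at boot. The entries and expressions below are made-up examples, not from any real project:

```java
// Hypothetical sketch of the jobs enumerator: every new job must be added here.
public enum PlannedJob {

    RELOAD_SCHEDULE("ReloadJob", "0 0/5 * * * ?"),
    NIGHTLY_CLEANUP("CleanupJob", "0 0 3 * * ?");

    private final String jobClass;
    private final String cron;

    PlannedJob(String jobClass, String cron) {
        this.jobClass = jobClass;
        this.cron = cron;
    }

    public String jobClass() { return jobClass; }

    public String cron() { return cron; }
}
```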