Modules

Here you can find a brief overview of the pladipus modules that are included with the current pladipus installation.


Introduction

Pladipus has a set of sample modules that work out of the box. Each module is based on an external project with its own command line or API. We always try to follow the command line arguments as provided in the documentation of the external tool and build on top of them. This also means that each module can be run outside of the pladipus framework!

For further questions on a particular topic, feel free to contact the developers.


Usage through the Java API

Here we discuss the basic usage of the pladipus API for the execution of runs. Note that this is mainly intended for developing custom processing steps and testing them locally before deploying them into the pladipus framework.

A pladipus processing step object can be provided with a HashMap of string key-value pairs. Each step can access this map, making it possible for programmers to tailor a step to their needs.
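As a rough sketch of what such a custom step could look like: the class name, the parameter keys ("input_file", "output_folder") and the exact base-class contract (a protected parameters map, a boolean doAction() that may throw) are illustrative assumptions derived from the calls used further below, not the definitive pladipus API.

        //hypothetical custom step; the base-class details are assumptions, check your pladipus version
        public class FileCopyStep extends ProcessingStep {

            @Override
            public String getDescription() {
                return "Copies an input file to an output folder";
            }

            @Override
            public boolean doAction() throws Exception {
                //read the values this step needs from the shared parameter map
                String input = parameters.get("input_file");
                String outputFolder = parameters.get("output_folder");
                java.nio.file.Files.copy(
                        java.nio.file.Paths.get(input),
                        java.nio.file.Paths.get(outputFolder, new java.io.File(input).getName()));
                return true;
            }
        }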

Below is an example of how a small single-job run can be constructed to execute a miniature pipeline consisting of SearchGUI and PeptideShaker using the search module.

1. Setting the parameters

Each processing step requires a map of parameters. We can create a single map for all steps and pass it to the job, which ensures that every step has access to the same parameters.

        HashMap<String, String> parameters = new HashMap<>();

        //replace the bracketed placeholders with actual file paths and names

        //searchGUI parameters
        parameters.put("spectrum_files", [INPUT MGF FILE]);
        parameters.put("fasta_file", [INPUT FASTA FILE]);
        parameters.put("id_params", [INPUT SEARCHGUI PARAMETERS FILE]);
        parameters.put("output_folder", [OUTPUT FOLDER]);

        //enable (1) or disable (0) the individual search engines
        parameters.put("xtandem", "1");
        parameters.put("msgf", "1");
        parameters.put("omssa", "0");
        parameters.put("ms_amanda", "0");
        parameters.put("myrimatch", "0");
        parameters.put("comet", "0");
        parameters.put("tide", "0");
        parameters.put("andromeda", "0");

        //peptideShaker parameters
        parameters.put("experiment", [EXPERIMENT_NAME]);
        parameters.put("out", [PEPTIDESHAKER_OUTPUT_FILE]);
        parameters.put("sample", [SAMPLE_NAME]);
        parameters.put("replicate", [REPLICATE]);

2. Constructing a processing job

A processing job is in essence a collection of processing steps. By providing the job with the parameter map and your pladipus user, every step added to it automatically receives these parameters, unless the step's parameter map was already set before the step was added to the job. A job can also be executed on its own by iterating over its steps and calling the doAction() method on each of them, as shown in the sketch after the construction example below.

        ProcessingJob job = new ProcessingJob(parameters, user);
        job.add(new SearchSetupStep());
        job.add(new SearchGUIStep());
        job.add(new PeptideShakerStep());
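For instance, to test a single job locally without setting up a full run, you can iterate over its steps directly; the same allowRun() check and iteration are used in the run example in step 4:

        //execute the steps of a single job locally
        if (job.allowRun()) {
            for (ProcessingStep aStep : job) {
                System.out.println(aStep.getDescription());
                aStep.doAction();
            }
        }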

3. Setting up a run

Setting up a run is straightforward: provide the constructor with a LinkedHashMap that has an identifier for the job as key and the processing job object as value.

        LinkedHashMap<String, ProcessingJob> jobs = new LinkedHashMap<>();
        jobs.put("test_search", job);
        ProcessingRun run = new ProcessingRun(jobs, "My_First_Run");
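A LinkedHashMap is used rather than a plain HashMap so that the jobs are iterated in the order in which they were added, which is the order in which they will be executed in the loop shown in step 4.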

4. Execution of a run

A run can be executed outside of the pladipus framework by iterating over all jobs and, in turn, over their processing steps.

        for (Map.Entry<String, ProcessingJob> aJob : run.getProcesses().entrySet()) {
            System.out.println("Running " + aJob.getKey());
            if (aJob.getValue().allowRun()) {
                for (ProcessingStep aStep : aJob.getValue()) {
                    System.out.println(aStep.getDescription());
                    aStep.doAction();
                }
            }
        }