To successfully run a job, we recommend the following procedure:
Yes, really – do all of this every time you run a job until you’ve run the same kind of job multiple times. Even then, test any time you change the parameters for that job, and definitely for any new type of job you’re running. As we said earlier, even staff with extensive experience in running batch jobs still make errors, occasionally catastrophic ones.
For any large job (where "large" means more than a few hundred records, and certainly 5,000 or more), please use the LTS Support form to contact LTS for advice and assistance on planning and running your job.
Most of the available jobs were developed by Ex Libris to serve all of their customers, so they may not work exactly the way we at Harvard would expect them to, based on our local needs.
The Ex Libris documentation includes an extensive table of jobs and their parameters, with notes and warnings about what those jobs will do and how Alma will treat records after a job is run. At Harvard, only some of these jobs have been rolled out for staff use. If a job you require is not included in the current list of LTS-approved jobs, let us know by sending a ticket to the Alma Support Center.
Only some jobs can be undone, and even then, the undo isn’t always perfect. We’ll come back to this in the troubleshooting section.
Once you understand the job you want to run, decide how to assemble the set you’re going to run it on. Most of the manual jobs in use at Harvard require a set of records to act on, so you need to create the set before you can run the job. All of the pre-work that’s a good idea for creating a set – determining the content type, the conditions for including records in a set, the search criteria and terms, etc. – is even more important if you’re going to run a job on it. Read more on creating and using sets.
After you create your set - especially if you create it from a file - spot-check to make sure all of the records you've included are supposed to be in there. If you have a large set, check a couple of records on the first page of members of the set, a couple on a middle page, and a couple on the last page; this way, your sample is spread across the whole set rather than clustered at the beginning.
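If you have exported the set members to a file, a short script can also pull a genuinely random sample to spot-check. This is only an illustrative sketch: it assumes a CSV export with an "MMS Id" column, which is not confirmed by this guide - adjust the column name to match your actual export.

```python
import csv
import random

def random_sample(export_path, k=6, id_column="MMS Id"):
    """Pick k random record IDs from an exported set-members CSV.

    Assumes the export has a header row containing id_column;
    returns fewer than k IDs if the set is smaller than k.
    """
    with open(export_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    sample = random.sample(rows, min(k, len(rows)))
    return [row[id_column] for row in sample]
```

You would then look up each returned ID in Alma and confirm the record belongs in the set.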
It's important to be able to find your set quickly in the list. Use your initials at the beginning or end - as many as you need if you have common letters - and create a detailed, specific set name. If you're working on a project over time with multiple sets, use the same base name with dates or sequential numbers appended, so related sets sort together.
To make checking your work easier later, you may want to Export your set after it's complete and before you run your job. This is particularly true for logical sets. The export saves a snapshot of the set members, which you can compare with a second export made after the job runs.
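One way to do that before/after comparison is a short script that reports which record IDs were dropped or added between the two exports. This is a sketch only, assuming CSV exports with an "MMS Id" column (adjust the column name for your export's actual layout).

```python
import csv

def load_ids(path, id_column="MMS Id"):
    """Read the set of record IDs from an exported set-members CSV."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[id_column] for row in csv.DictReader(f)}

def compare_exports(before_path, after_path):
    """Return (dropped, added) record IDs between two set exports."""
    before = load_ids(before_path)
    after = load_ids(after_path)
    return sorted(before - after), sorted(after - before)
```

For a logical set, this makes it easy to see whether the job itself changed which records match the set's criteria.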
When you feel you have the right parameters and the right set criteria, make sure to test your job in the Sandbox before running it in Production. This step is crucial because we are still discovering the less-obvious ways that Alma works. The time spent testing is also much shorter than the amount of time it will take to fix any errors or restore withdrawn item records later.
Also, because you (and other people) are testing processes in the Sandbox, don't treat the records there as a "backup" or "clean copy" of records in Production. Other staff may already have applied processes or changes to them in their own sandbox testing.
When you run your test, use a set with a very small number of records, 10 or fewer. This is both to help the job run quickly – so you can see the results sooner – and so you can individually check each record for how the update affected it.
Before you are given permission to run batch updates in Production, LTS requires at least one successful update in the Sandbox using the exact set criteria and parameters. After you have been given these permissions, please continue to contact LTS Support if you'd like to run an update on more than 5,000 records at once.
Reminder: Alma is a hosted service, which means that our data lives on servers managed by Ex Libris…along with data from hundreds of other institutions around the country. All of these institutions are running jobs, and each job requires resources from those servers. This means that jobs take time to process, and the bigger and more complex the job, the longer it takes. Very large indexing and publishing jobs happen overnight and over weekends in the US, to take advantage of lower usage.
Plan ahead for large updates, and do not rely on a job to complete immediately if there's a bigger job ahead of it in the queue.
See the Ex Libris page on Planning Batch Job Guidelines for an idea of how long things can take. https://knowledge.exlibrisgroup.com/Alma/Product_Documentation/010Alma_Online_Help_(English)/050Administration/070Managing_Jobs/010Overview_of_Jobs#Batch_Job_Planning_Guidelines