
Tiered Ingest allows you to group all of the files corresponding to an asset's datastreams (with the exception of RELS-EXT) into a sub-directory.

Use Case:  As a DAMS user, I need to ingest a set of digital objects (including archival files, publication files, other derivatives created outside of Islandora) as one Fedora asset with multiple datastreams so that my workflow is streamlined and related objects are stored together in one place.


General information for batch ingest

The batch ingest process runs continuously, looking for newly queued batch jobs approximately every 5 minutes. You can add batch ingest jobs to the queue at any time.

Batch jobs are subject to the following batch job size and file size limitations:

  • max. 100GB/batch job
  • max. 10GB/file
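As an illustration only, the two limits above can be checked on staged files before queuing a batch. This is a hypothetical staging-side sketch, not part of the DAMS itself; the function name and limits-as-constants are assumptions.

```python
import os

MAX_BATCH_BYTES = 100 * 1024**3  # max. 100GB per batch job
MAX_FILE_BYTES = 10 * 1024**3    # max. 10GB per file

def check_batch_size(batch_dir):
    """Walk a staged batch job folder and check it against the size limits.

    Returns (batch_within_limit, oversized_files) where oversized_files
    lists any individual files exceeding the per-file limit.
    """
    total = 0
    oversized = []
    for root, _dirs, files in os.walk(batch_dir):
        for name in files:
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            total += size
            if size > MAX_FILE_BYTES:
                oversized.append(path)
    return total <= MAX_BATCH_BYTES, oversized
```

Running this before upload avoids discovering an oversized file only after the batch has been queued.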

Step 1: Stage files for batch ingest job

Organise files in a batch job folder, using subfolders if appropriate. Refer to the instructions/options listed below for preparing batch jobs.




The tiered ingest batch module uses filenames to identify which files correspond to which datastreams. All of the files you are ingesting as one asset go in a single directory, a sub-directory of the path you identify in the queue form. Each sub-directory corresponds to one asset and must contain, at minimum, a manifest file (datastreams.txt) listing the key datastreams. This file lists each datastream ID and its corresponding filename, for instance the MODS datastream (MODS.xml), the OBJ datastream (e.g. filename.tif for a large image), or other datastreams with derivatives.

In order for the script to know which datastreams are to be ingested, a "manifest" must be included with the queued batch:

datastreams.txt
OBJ==primaryfile.ext
MODS==metadata.xml
# optional, if no MODS file is included, minimal metadata is automatically generated during ingest
PDF==custom.pdf
# optional
ARCHIVAL_FILE==originalversionof_primaryfile.ext
# optional, use for archival file (e.g. uncropped scan)
COMPONENT1==componentfile1.ext
# optional, can for instance be used in cases where a primary image is stitched from multiple component images; increment for additional files in same directory
# DO NOT use for complex objects that can be modeled as paged content or Islandora component assets!
MEDIAPHOTOGRAPH1==anymediaphotographfile.ext 
# optional, can be used for images documenting physical media, cases, covers, etc.; increment for additional files in same directory
DERIVATIVE1==anyarbitraryderivativefile.ext 
# optional, use for derivative files with direct descendant relationship from file designated OBJ; increment for additional in same directory
# CAUTION, do not duplicate derivative files that are automatically generated by the DAMS
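The manifest format above is simple enough to parse with a few lines of code. As a sketch only (the function name is hypothetical, not part of the DAMS), a datastreams.txt file can be read into a datastream-ID-to-filename mapping like this, skipping blank lines and `#` comments:

```python
def parse_manifest(path):
    """Parse a datastreams.txt manifest into {datastream_id: filename}.

    Lines have the form 'OBJ==primaryfile.ext'; lines beginning with '#'
    are comments, and blank lines are ignored.
    """
    mapping = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # split on the first '==' separator
            dsid, _, filename = line.partition("==")
            mapping[dsid.strip()] = filename.strip()
    return mapping
```

For the example manifest above, this would yield entries such as `{"OBJ": "primaryfile.ext", "MODS": "metadata.xml", ...}`.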


OBJ==primaryfile.ext [designation of primary file is at digital stewardship staff discretion, in consultation with requesting content holder]

DERIVATIVE1==anyarbitraryderivativefile.ext [use for derivative file with direct descendant relationship from file designated OBJ; increment for additional in same directory]

COMPONENT1==anyarbitrarycomponentfile.ext [use for cases such as a file comprising one piece of a stitched OBJ or one page image in a pdf OBJ; increment for additional in same directory]

MEDIAPHOTOGRAPH1==anymediaphotographfile.ext [use for images documenting physical media, cases, covers, etc.; increment for additional in same directory]

MODS==metadata.xml   [use for an optional included metadata file; if not included, a very minimal MODS record will be added]

Notes:  

  • The bracketed [text] above is for explanatory purposes only and should not be included in the datastreams.txt file.
  • Additions beyond the standard datastream IDs shown above are allowed. Consult with the DAMS Management Team for recommendations.


Example Ingest:

User1 in Architecture has a collection and needs to ingest their media with extra datastreams.

They use FTP to upload their files to the server into a directory called batch1.

They then fill out the queue form as follows:

Select collection >>> Architectural Collections

Enter identifier of sub-collection that will contain your batch of assets >>> utlarch:5a4f464a-b4d5-4dd7-b2c2-4562643ac1bd

Enter name of batch job folder >>> batch1

sample directory structure:


eid1234_example-batch-submission/ (batch job folder)
├── asset1/
│   ├── datastreams.txt
│   ├── modsfile.xml
│   ├── primaryfile.tif
│   ├── anyarbitraryderivativefile.ext
│   ├── anyarbitrarycomponentfile.ext
│   └── anymediaphotographfile.ext
├── asset2_audio_example/
│   ├── datastreams.txt
│   ├── modsfile.xml
│   ├── audiofile.wav
│   ├── derivative_audiofile_for_streaming.mp4 (e.g. for creating PROXY_MP4 datastream, which is required for streaming audio)
│   └── audio_transcript.txt
└── asset3_video_example/
    ├── datastreams.txt
    ├── modsfile.xml
    ├── videofile.mp4
    ├── video_captions.vtt
    ├── video_transcript.txt
    └── page02_custom_ocr.txt

Notes:

  • asset1, asset2, and asset3 as shown above sit under the batch directory; each sub-directory represents an individual asset with its datastreams
  • a batch can contain just one asset, but it still needs the extra level of nesting
  • there is no upper limit on the number of assets/objects, though the batch job and file size limits listed above still apply
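Before queuing, it can be worth confirming that every asset sub-directory actually matches its manifest. As a sketch only (the function name is hypothetical, not a DAMS feature), a check that each asset folder has a datastreams.txt and that every file it names exists could look like this:

```python
import os

def validate_batch(batch_dir):
    """Check each asset sub-directory of a batch job folder.

    Verifies that every sub-directory contains a datastreams.txt manifest
    and that every filename the manifest references actually exists.
    Returns a list of human-readable problem descriptions (empty if OK).
    """
    problems = []
    for asset in sorted(os.listdir(batch_dir)):
        asset_dir = os.path.join(batch_dir, asset)
        if not os.path.isdir(asset_dir):
            continue  # stray files at the top level are ignored here
        manifest = os.path.join(asset_dir, "datastreams.txt")
        if not os.path.isfile(manifest):
            problems.append(f"{asset}: missing datastreams.txt")
            continue
        with open(manifest) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                _dsid, _, filename = line.partition("==")
                filename = filename.strip()
                if not os.path.isfile(os.path.join(asset_dir, filename)):
                    problems.append(f"{asset}: file not found: {filename}")
    return problems
```

An empty result means the batch folder structure is internally consistent; any reported problems should be fixed before the job is queued.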