Knowlg Jobs
This page lists the jobs that belong to Sunbird Knowlg.
This job handles the image and video media files that are part of uploaded/created content. Whenever an asset/media file is uploaded as part of content, an event is generated and pushed to Kafka to trigger the 'asset-enrichment' job.
- Image Enrichment: As part of image media file enrichment, the image is resized with optimal DPI. Three variants (low, medium and high resolution) of the image are generated and stored in the cloud.
- Video Enrichment: As part of video media file enrichment, the job fetches video metadata, generates a thumbnail for the video, and triggers the video-stream-generator job to generate a streamable source.
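The trigger flow above can be illustrated with a minimal sketch. The event shape and field names here are illustrative assumptions, not the exact Knowlg Kafka schema:

```python
# Hypothetical event payload; field names are illustrative and do not
# reflect the exact Knowlg/Kafka schema.
def build_enrichment_event(content_id, media_url, mime_type):
    """Build the kind of event that triggers the asset-enrichment job."""
    if mime_type.startswith("image/"):
        action = "image-enrichment"   # resized into low/medium/high variants
    elif mime_type.startswith("video/"):
        action = "video-enrichment"   # metadata, thumbnail, stream generation
    else:
        action = "skip"
    return {
        "eid": "BE_JOB_REQUEST",
        "object": {"id": content_id},
        "edata": {"action": action, "mediaUrl": media_url, "mimeType": mime_type},
    }

event = build_enrichment_event("do_123", "https://cdn.example/asset.png", "image/png")
print(event["edata"]["action"])  # image-enrichment
```

The mimeType prefix decides which enrichment path runs; any other media type is ignored by this sketch.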

This job uses Neo4j mutation data to generate AUDIT events for Knowlg objects as per the Sunbird Telemetry spec; these events are consumed by data analytics jobs.

This job uses Neo4j mutation data to index transactions for audit purposes. The old and new values of the updated object in each Neo4j transaction are audited.
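The old/new-value auditing can be sketched as a simple property diff. This is an assumption about the general shape of the audit record, not the job's actual implementation:

```python
def audit_diff(old_props, new_props):
    """Return old value ('ov') and new value ('nv') for every property
    that changed between two snapshots of a Neo4j node."""
    keys = set(old_props) | set(new_props)
    return {
        k: {"ov": old_props.get(k), "nv": new_props.get(k)}
        for k in keys
        if old_props.get(k) != new_props.get(k)
    }

diff = audit_diff(
    {"status": "Draft", "name": "Algebra"},
    {"status": "Live", "name": "Algebra", "pkgVersion": 1},
)
print(sorted(diff))  # ['pkgVersion', 'status']
```

Unchanged properties (here, `name`) are excluded, so only the transaction's actual mutations are audited.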

This job publishes content/collection objects that are submitted for publishing. It takes care of the following functions:
- Downloading media files and packaging them as an ECAR for offline consumption.
- Publishing the updated/edited collection hierarchy data.
- Updating content/collection object node metadata with the latest publish information, such as ECAR paths, versionKey, pkgVersion, streamingUrl, status, etc.
- Indexing collection leaf node objects into Elasticsearch.
- Clearing cached node data from Redis.
- Emitting events to output topics for post-publish processing, video stream generation (if streaming is enabled and the content has a streamable mimeType) and MVC indexing (performed by a non-Knowlg job).
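The metadata-update step above can be sketched as follows. The field names (`pkgVersion`, `status`, `downloadUrl`, `variants`) follow the list above; the function itself is a simplified assumption, not the job's actual code:

```python
def publish_metadata(node, ecar_urls):
    """Sketch of the publish info stamped onto a node after a successful
    publish: bump pkgVersion, mark Live, attach ECAR paths."""
    updated = dict(node)  # leave the input node untouched
    updated["pkgVersion"] = node.get("pkgVersion", 0) + 1
    updated["status"] = "Live"
    updated["downloadUrl"] = ecar_urls.get("full")
    updated["variants"] = ecar_urls  # e.g. full/spine ECAR variants
    return updated

node = publish_metadata(
    {"identifier": "do_123", "pkgVersion": 2},
    {"full": "cloud-path/do_123_3.ecar", "spine": "cloud-path/do_123_3_spine.ecar"},
)
print(node["pkgVersion"], node["status"])  # 3 Live
```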

This job triggers post-publish activities when a collection is published, such as:
- Shallow Copy: Re-publishing the hierarchy information of shallow-copied collections when their origin collection is published.
- Default DIAL code generation: Reserving DIAL codes, linking them, and generating QR code images for the reserved DIAL codes by default for objects with the 'Course' primaryCategory.
- Course Batch Creation: Based on the 'traceability' configuration, triggering automatic batch creation for a 'Course' primaryCategory object if no running batch exists.
- DIAL Code Context Update: Triggering individual context-update events for DIAL codes newly added to or removed from a published collection/content.
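The DIAL code context-update step amounts to a set difference between the codes linked before and after publish. A minimal sketch, with an assumed event shape:

```python
def dial_context_events(old_codes, new_codes):
    """Emit one context-update event per DIAL code that was newly added
    to, or removed from, the published collection/content."""
    added = sorted(set(new_codes) - set(old_codes))
    removed = sorted(set(old_codes) - set(new_codes))
    return ([{"dialcode": c, "action": "added"} for c in added]
            + [{"dialcode": c, "action": "removed"} for c in removed])

events = dial_context_events({"A1X2", "B3Y4"}, {"B3Y4", "C5Z6"})
print(events)
# [{'dialcode': 'C5Z6', 'action': 'added'}, {'dialcode': 'A1X2', 'action': 'removed'}]
```

Codes present in both snapshots (`B3Y4` above) produce no event, so only genuine link/de-link changes are propagated.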

Note: The side output topic 'publish.topic = {{ env_name }}.learning.job.request', currently serviced by the 'publish-pipeline' Samza job, will be updated to 'publish.topic = {{ env_name }}.publish.job.request' and serviced by the 'content-publish' Flink job.
All Samza jobs have been converted to Flink jobs as of release-5.0.0. From release-5.0.0 onwards, there will be no development/enhancement on Samza jobs.
This job generates QR code images for the reserved DIAL codes of a collection, using the process ID generated when the DIAL code reserve API is invoked.

This job uses Neo4j transactions to index objects' metadata into the composite search index, DIAL code index and DIAL code metrics index in Elasticsearch.

This job generates streaming media for uploaded video content (mp4 and webm mimeTypes).

This job updates context information for the linked/de-linked DIAL codes of a content/collection.

Note: Some of these jobs were Samza jobs before release-4.8.0. Since release-4.8.0, all Knowlg jobs are Flink jobs.
This job replaces strings in the data of a column in the Cassandra database. It can be used for partial String/Blob column data modification only. Configured key strings are replaced with their respective value strings wherever they occur in the data of each row of the configured keyspace-table-column.
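The per-row replacement logic can be sketched as below. The configuration shape (a key-to-value replacement map) follows the description above; everything else is an illustrative assumption:

```python
def migrate_column_value(value, replacements):
    """Apply configured key -> value string replacements to one
    column value (partial String/Blob modification)."""
    for old, new in replacements.items():
        value = value.replace(old, new)
    return value

# Hypothetical replacement config for one keyspace-table-column.
replacements = {"old-bucket.example.net": "cdn.example.org"}
row_value = "https://old-bucket.example.net/content/do_1/asset.png"
print(migrate_column_value(row_value, replacements))
# https://cdn.example.org/content/do_1/asset.png
```

Rows that contain none of the configured keys pass through unchanged.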

This job replaces cloud service provider references in the Neo4j and Cassandra data of assets/contents/collections with the CNAME prefix variable value or with the new cloud service provider path. The job is triggered for each asset/content/collection in Draft/Live/Image nodes from the Sync tool available as part of a Jenkins job. Migrated contents are stamped with the migrationVersion configured as part of the job.
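Per node, the migration swaps the old provider prefix for the CNAME value across string properties and stamps the configured migrationVersion. A minimal sketch under those assumptions (property names are illustrative):

```python
def migrate_node(node, old_prefix, cname, migration_version):
    """Rewrite cloud provider URLs in a node's string properties and
    stamp the configured migrationVersion."""
    out = {
        k: (v.replace(old_prefix, cname) if isinstance(v, str) else v)
        for k, v in node.items()
    }
    out["migrationVersion"] = migration_version
    return out

node = {"identifier": "do_123", "artifactUrl": "https://oldcsp.example/do_123/video.mp4", "pkgVersion": 2}
migrated = migrate_node(node, "https://oldcsp.example", "https://cdn.example", 1.0)
print(migrated["artifactUrl"])  # https://cdn.example/do_123/video.mp4
```

The migrationVersion stamp lets the Sync tool skip nodes that were already migrated.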

This job republishes already-live contents. It refers only to the data of the live node, not to the image node data, if one exists. Note that the image node, if it exists, will not be deleted from Neo4j.

This job is an exact replica of the 'video-stream-generator' job. It was created to monitor/control the infrastructure required for generating new streaming URLs as part of the migration to the new cloud service provider.