Batch Jobs Not Working and Admin Console returning Error -1

Hello, ever since I tried to bulk-upload files with a CSV file, I have been having multiple issues with KMC and the Admin Console.

Now I’m not able to perform any batch operation, including batch actions from the KMC UI (when I try, KMC stops responding), and when I try to access the Admin Console I receive the following message:

An error occurred (error code: API:-1)

Also, when running /opt/kaltura/bin/, I get:

[Space on /opt/kaltura] [FAILED, RC: 1] - [.004353692]

On the other hand, playback of the existing media files works fine, and API access is also working properly.

Thanks in advance.

Juan Pablo.

Hi @juanpanie,

Let’s start with the space issue.
Are you really out of space? What’s the output of:
# df -h

If so, obviously, this needs to be fixed:)

Next, in regards to the Admin Console: this can SOMETIMES be caused by browser cache. Did you try a different browser? If it works there, try clearing the cache in your original browser. If it doesn’t, from a root shell on the server, run:

# source /etc/profile.d/kaltura*
# kaltlog

then make the login request and look at the errors printed to STDOUT.
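kaltlog is essentially a tail over the Kaltura logs filtered for error markers; the same filter can be applied by hand. The sample lines below are inlined so the command is self-contained — on the server you would tail the real files under /opt/kaltura/log/ instead:

```shell
# Filter log lines for errors, the way kaltlog does.
# Sample input is embedded via a here-doc for illustration.
grep -E 'ERR:|CRIT:' <<'EOF'
2018-01-09 21:52:24 [PS2] [kFileSyncUtils::getReadyFileSyncForKey] NOTICE: FileSync was not found
2018-01-09 21:52:24 [PS2] [kCoreException->__construct] ERR: exception 'kFileSyncException'
EOF
```

Only the ERR line survives the filter, which keeps the noise down while you reproduce the failing login.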

As for the bulk upload, first make sure your batch daemon is running:
# service kaltura-batch status
If it is not, try to restart it and look for errors in /opt/kaltura/log/kaltura_batch.log.

Assuming the batch daemon is running, try to submit a bulk CSV file while running kaltlog and look for errors. In addition, open your browser’s dev console and check for errors and failing requests in both the “Console” and “Network” tabs.
You can also test the bulk upload functionality directly from the server by running this CLI script:

# /opt/kaltura/bin/upload_bulk.php <service_url> <partner id> <secret> <uploader> </path/to/csv> <bulkUploadXml.XML|bulkUploadCsv.CSV>

This will call the same APIs KMC does and may be easier to debug.
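If you want a minimal file to test with: a bulk upload CSV is a plain CSV whose first row, prefixed with an asterisk, names the fields. Something along these lines (the column set here is a minimal illustrative one and the URLs are placeholders; match it to your own bulk upload template):

```csv
*title,description,tags,url
Test clip one,uploaded via bulk CSV,demo,http://example.com/video1.mp4
Test clip two,uploaded via bulk CSV,demo,http://example.com/video2.mp4
```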

Thanks for the quick response Jess. On the first issue, related to space: yes, I actually am out of space. Is there any way to store the files on a different disk?

Regarding the Admin Console, I still have the issue after clearing the browser cache, and I see the same issue in different browsers. I ran:

source /etc/profile.d/kaltura*


But I am not receiving anything in the output when performing the login (maybe I’m missing something?)

Regarding the batch daemons… the service is running, but I guess this was related to not having space on the disk. I removed some files directly from the server and Kaltura started downloading and converting new files (even though I had deleted the bulk upload job).

I will perform the test directly from the server and will let you know if I hit new errors.

Thanks again!!

Juan Pablo.

First let’s solve the space issue:)
Start by running:
# df -h /opt/kaltura/*

My guess is that /opt/kaltura/log will have a lot of old and archived log files which you can delete.
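Pruning old archives can be scripted; here is a sketch, demonstrated on a throwaway directory so it is safe to copy-paste (on the server the path would be /opt/kaltura/log, and the 14-day retention window is my choice, not a Kaltura default):

```shell
# Demo on a temp dir; substitute /opt/kaltura/log on a real server.
d=$(mktemp -d)
touch -d '30 days ago' "$d/kaltura_api_v3.log-20171210.gz"   # old archived log
touch "$d/kaltura_api_v3.log-20180108.gz"                    # recent archive
find "$d" -name '*.gz' -mtime +14 -delete                    # drop archives older than 14 days
ls "$d"                                                      # only the recent archive remains
```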
Also, is /opt/kaltura/web on a separate partition or on a NAS, or is it all on the root partition? If it’s on the root partition [/] and you ran out of space, that obviously needs to be handled: add another drive, move it to another partition that exists and has space, or, best of all, move it to a dedicated machine and mount it over NFS on the Kaltura server.
Needless to say that for a cluster, the /opt/kaltura/web volume MUST be mounted on all front and batch nodes.
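On a cluster node, that mount would typically be declared in /etc/fstab; an illustrative entry (the server name, export path, and mount options here are placeholders, not from this thread):

```
# shared Kaltura content volume, mounted on every front and batch node
nfs-server:/export/kaltura-web   /opt/kaltura/web   nfs   rw,hard,noatime   0 0
```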

I removed some logs and videos and now the Admin Console is responding again :slight_smile: And yes, I have /opt/kaltura/web on the root partition, but I’m not sure how to move it to a different one:

I have an additional partition that I want to use for the videos:

Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G 8.5G 20G 30% /
/dev/sdb1 1.4T 70M 1.3T 1% /mnt

Can you help me with moving /opt/kaltura/web to the partition /dev/sdb1?

BTW, this is an all-in-one Kaltura installation.


Juan Pablo

Hi Juan Pablo,

The easiest way would be to move /opt/kaltura/web to /mnt/web and then create a symlink from /opt/kaltura/web pointing to /mnt/web. It is very important that when you copy the files to their new location, you preserve their original permissions [mode and ownership].
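A sketch of the move, shown on throwaway directories so it is safe to run anywhere. On the real server, src would be /opt/kaltura/web and dst would be /mnt/web, and you would want to stop the Kaltura services for the duration of the copy (those specifics are my assumptions, not steps from this thread):

```shell
# Demonstrated on temp dirs; on the server: src=/opt/kaltura/web dst=/mnt/web
src=$(mktemp -d)/web; dst=$(mktemp -d)/web
mkdir -p "$src"; echo content > "$src/file"; chmod 640 "$src/file"
cp -a "$src" "$dst"        # -a preserves mode, ownership and timestamps
mv "$src" "$src.bak"       # keep the original until the symlink is verified
ln -s "$dst" "$src"        # /opt/kaltura/web now points at the new volume
stat -c '%a' "$src/file"   # mode survives the copy through the symlink
```

Only after verifying that everything works through the symlink would you delete the .bak copy.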

Hi Jess, sorry for the delay on this. I was facing some issues with the VM disks first (also the end-of-year celebrations :stuck_out_tongue: ) and now I’m able to try the configuration. But let me also rephrase my question: what is the best option for storing a large number of videos? Adding more space to an OS disk, or can we use an S3 or Azure storage scheme with Kaltura? I’ve checked this post: but it would be great to have your recommendation.

(I can put the question in a different thread if that helps)

Thanks in advance and happy new year! :slight_smile:

Hi @juanpanie

Happy new year to you as well:)
You can, and depending on your traffic and geo distribution probably SHOULD, export your media content to a remote storage and then serve it from a CDN. With Amazon, you can export to an S3 bucket and use CloudFront for serving the content.

Out of the box, Kaltura supports creating remote storage profiles that export content over S3, SFTP and FTP [though FTP is not recommended for several reasons]. I’ve never used Azure for that, but if it can speak SFTP then it should also work out of the box.

Once that’s done, you also need to create a delivery profile so that the content is served from CDN.
You can configure the remote storage profile so that files exported to the remote storage are then deleted from the local content dir/NFS volume [/opt/kaltura/web/content], but be aware that if you do so, you will not be able to reconvert these source files or perform other operations such as trimming and clipping.

Either way, you want to leave a healthy amount of free space on your Kaltura nodes’ root partition and on the /opt/kaltura/web/content partition [which may or may not be the same thing], because the first step will ALWAYS be to upload the files to /opt/kaltura/web/content/uploads, then perform the transcoding operations that create the different flavours and place them under /opt/kaltura/web/content/entry; only then can the media files be exported to the remote storage.

Hope that helps clarify things,

Hi Jess, I’ve been able to go through the process of configuring the remote storage on AWS S3, but I’m not able to see the files in my bucket, and I’m receiving a 404 when trying to play the video in the preview player.

The Delivery Profile is:
Status: Active
ID: 1005
Type: HTTP
Streamer Type: HTTP

And the Remote Storage Profile is:
Status: Automatic
ID: 1
Name: Amazon S3
Protocol: Amazon S3
Publisher ID: 101
Path Manager: Kaltura Path
Trigger: Flavor Ready
Ready Behavior: No Effect

S3 Bucket Policy:

"Version": "2012-10-17",
"Statement": [
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": {
"AWS": "*"
"Action": [
"Resource": "arn:aws:s3:::mmtmd/*"


and also tried with

"Principal": {
"AWS": "*"

While doing the upload, I get this in kaltlog:

2018-01-09 21:52:24 [0.000115] [] [548696055] [59] [PS2] [kFileSyncUtils::getReadyFileSyncForKey] NOTICE: FileSync was not found
2018-01-09 21:52:24 [0.000273] [] [548696055] [60] [PS2] [kCoreException->__construct] ERR: exception 'kFileSyncException' with message 'no ready filesync on current DC' in /opt/kaltura/app/alpha/apps/kaltura/lib/myEntryUtils.class.php:845
Stack trace:
#0 /opt/kaltura/app/alpha/lib/model/entry.php(3515): myEntryUtils::resizeEntryImage(Object(entry), 0, 120, 90, 2, 'F7F7F7', NULL, 0, 0, 0, 0, 0, -1, '-1', '-1')
2018-01-09 21:52:24 [0.000092] [] [548696055] [68] [PS2] [kFileSyncUtils::getReadyFileSyncForKey] NOTICE: FileSync was not found
2018-01-09 21:52:24 [0.000111] [] [548696055] [69] [PS2] [entry->getLocalThumbFilePath] ERR: exception 'Exception' with message 'No ready fileSync found on any DC.' in /opt/kaltura/app/infra/log/KalturaLog.php:83
Stack trace:
#0 /opt/kaltura/app/alpha/lib/model/entry.php(3532): KalturaLog::err('No ready fileSy...')
#16 {main}
2018-01-09 21:52:24 [0.000163] [] [548696055] [70] [PS2] [KExternalErrors::dieError] ERR: exception 'Exception' with message 'exiting on error 10 - missing thumbnail fileSync for entry' in /opt/kaltura/app/infra/log/KalturaLog.php:83
Stack trace:
#0 /opt/kaltura/app/alpha/apps/kaltura/lib/KExternalErrors.class.php(136): KalturaLog::err('exiting on erro...')
            [tmp_name] => /opt/kaltura/web/content/uploads/phpNeWpe3
            [error] => 0
            [size] => 874507
            [tmp_name] => /opt/kaltura/web/content/uploads/phpNeWpe3
            [error] => 0
            [size] => 874507
==> /opt/kaltura/log/batch/extractmedia-0-2018-01-09.err.log <==
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 281
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 281
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 281
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 281
PHP Warning:  Division by zero in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 281
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 282
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 282
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 282
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 282
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 282
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 282
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 282
PHP Notice:  Trying to get property of non-object in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 282
PHP Warning:  Division by zero in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 282

==> /opt/kaltura/log/batch/extractmedia-0-2018-01-09.err.log <==
PHP Warning:  Creating default object from empty value in /opt/kaltura/app/infra/media/mediaInfoParser/KMediaFileComplexity.php on line 246

==> /opt/kaltura/log/batch/storageexport-2-2018-01-09.log <==
2018-01-09 21:53:01 [0.646729] [78461873] [37] [BATCH] [s3Mgr->doPutFileHelper] ERR: exception 'Exception' with message 'error uploading file /opt/kaltura/web/content/entry/data/0/1/0_ej6wj27l_0_wnghzbyq_12.mp4 s3 info: Access Denied' in /opt/kaltura/app/infra/log/KalturaLog.php:83
Stack trace:
#0 /opt/kaltura/app/infra/storage/file_transfer_managers/s3Mgr.class.php(178): KalturaLog::err('error uploading...')
==> /opt/kaltura/log/batch/storageexport-2-2018-01-09.log <==
2018-01-09 21:53:01 [0.868523] [78461873] [40] [BATCH] [s3Mgr->doPutFileHelper] ERR: exception 'Exception' with message 'error uploading file /opt/kaltura/web/content/entry/data/0/1/0_ej6wj27l_0_wnghzbyq_12.mp4 s3 info: Access Denied' in /opt/kaltura/app/infra/log/KalturaLog.php:83
Stack trace:
#0 /opt/kaltura/app/infra/storage/file_transfer_managers/s3Mgr.class.php(178): KalturaLog::err('error uploading...')

It’s clear to me that there is an access issue, but I tried with root credentials and with user credentials and I get the same result. Another thing: if I switch the remote storage settings to “Kaltura First”, I am able to play the video.

Do you have any tips for looking into S3 or the remote storage settings?

Thanks in advance.


Hi @juanpanie,

You need to check why the export to the S3 bucket is failing.
You can start by going to Admin Console->Batch Process Control and inputting an entry ID into the search box.
This will show the entire ingestion and export flow, including all the batch jobs that were attempted.
One of these will be “Storage Export”. You will get an indication of the job’s final status as well as some basic information about it. If you cannot determine what happened by looking there, you should check the storage export batch log[s] here:


Note that fatal errors are logged to a separate file, so you should have at least two log files, though the storageexport-$TIMESTAMP.err.log file may be empty. Go over both logs and grep for the entry ID to follow the flow and locate possible failures.
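Grepping for the entry ID looks like this; sample lines are inlined here so the command is self-contained, but on the server you would run it against the real files under /opt/kaltura/log/batch/ (0_ej6wj27l is the entry ID visible in the log excerpts earlier in this thread):

```shell
# Follow one entry through a storage export log.
grep '0_ej6wj27l' <<'EOF'
2018-01-09 21:53:01 [BATCH] [s3Mgr->doPutFileHelper] ERR: error uploading file 0_ej6wj27l_0_wnghzbyq_12.mp4 s3 info: Access Denied
2018-01-09 21:53:02 [BATCH] job for entry 0_zzzzzzzz finished OK
EOF
```

Only the lines for that entry are shown, so you can read the export flow for a single entry without the rest of the batch traffic.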

Hi Jess, thanks for the guidance. The log files helped me find the issue, which was related to the way I was providing the S3 path; now Kaltura is able to transfer the encoded videos to S3. The only thing missing is how to get a playable (adaptive-bitrate) link for the videos. Is there a particular API method to get the playable URL, and not just the link to the file?

Also, jumping back to the original question: I created a symlink /opt/kaltura/web --> /data1/kaltura/web with the same permissions and ownership, and I just get an “Uploading” status for the file. When I track the entry ID in the Kaltura Console, Kaltura is not processing the file, and there is nothing weird in kaltlog either.

I found a similar issue here:

I hope you can help me on this too :slight_smile:

Thanks in advance.

Juan Pablo.

Hi @juanpanie,

In regards to your first question, see my explanation of the playManifest API here:

As for your ingestion issue, we need to check the logs for errors; I’m sure they are there:)
See Kaltura File not processing after moving to new db and storage for the logs you should look at as a starting point.
What exactly do you see about each job in the entry investigation view?

That was a quick response, haha!! Thanks, I will take a look there :slight_smile:

Hi Jess, I’ve been working on this and now I’m pretty close to finished. I was able to put /opt/kaltura/web on a different disk (it was an issue with the permissions of the tmp folder inside web). The S3 storage is working too, and Kaltura is moving the source and flavors to S3 as expected. Finally, when I play the file inside KMC, I can see in my Chrome console that the player is hitting CloudFront on AWS.

The only thing I cannot resolve yet: I was using a React player in my app (playing video outside the Kaltura Player) that is not working anymore with this configuration. Based on your post about enforcing the delivery type, I’m using this pattern for my HLS player (react-hls): {entryId}/flavorIds/{flavorsIds}/format/applehttp/protocol/http/a.m3u8. Before using the remote storage, that link played fine in react-hls, but now it is failing. Is this something I can fix with the delivery profile?

Im working with this configuration:
Status: Active
ID: 1005

If I open the link http://mydomain/p/101/sp/10100/playManifest/entryId/0_wb7sd5qy/format/url/protocol/http, the video plays (format: url) in the browser, but not with the applehttp format. I will keep working on this, but if you can shed some light it would be great!

Thanks in advance.

Juan Pablo.

Hi Juan,

If it plays correctly with the Kaltura player, I see no reason for it not to play with any other player that can read an HLS manifest. I suppose you can start by testing it with and seeing if you get any errors…
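For what it’s worth, all the playManifest URLs in this thread follow one pattern; a small helper (the function name is mine, not a Kaltura tool) makes it easy to flip between format/url and format/applehttp while testing. Note the sp segment is just the partner ID with "00" appended, which is why the string concatenation below works for partner 101:

```shell
# Hypothetical helper: assemble a playManifest URL from its parts.
manifest_url() {
  local host=$1 partner=$2 entry=$3 format=$4
  local url="http://${host}/p/${partner}/sp/${partner}00/playManifest/entryId/${entry}/format/${format}/protocol/http"
  # HLS requests in this thread end with an a.m3u8 suffix
  [ "$format" = "applehttp" ] && url="${url}/a.m3u8"
  echo "$url"
}

manifest_url mydomain 101 0_wb7sd5qy applehttp
manifest_url mydomain 101 0_wb7sd5qy url
```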

Hi Jess, unfortunately it is not working at all on the M3U8 tab, and it is not working on any tab when using the applehttp format in the URL.

On the MP4 player, it works only with this URL: ,0_vl8ip5zg,0_1b8i236f,0_ypwi1gug,0_i16vvc30/format/url/protocol/http

But when I change to the applehttp format, I receive a 404 error in the console. I will try to sniff the traffic with Fiddler while playing from the Kaltura player and get the HLS URL. I guess I will find a URL to my Kaltura server and then a redirect to CloudFront, and the same each time the player requests a different flavor, right?

Thanks in advance.

Juan Pablo.

Hi Juan,

Is your system accessible from the outside world? If so, please provide a valid URL to a manifest and I’ll take a look.
Also, using a network sniffer, you can see the full playManifest request the Kaltura player issues when you hit play; this can help you as well since, if my understanding is correct, you are able to use HLS with the Kaltura player.

Hi Jess, yes. These are the links I get from the entry investigation in the Kaltura Console:

www -
cdn -
raw -
Manifest -

Those links work in web browsers but not in the HLS player.

Before making the changes and starting to use S3 and CloudFront, I was able to play the HLS video like this: ,0_vl8ip5zg,0_1b8i236f,0_ypwi1gug,0_i16vvc30/format/applehttp/protocol/http/a.m3u8

But not anymore.

It’s a different story if I use the MP4 player with the URL: ,0_vl8ip5zg,0_1b8i236f,0_ypwi1gug,0_i16vvc30/format/url/protocol/http

But I need HLS.

Maybe something is wrong in the delivery profile configuration? Or even in the CloudFront setup?

Thanks again! :slight_smile:

Hi @juanpanie,

Let’s start with:

mysql> select * from delivery_profile where id=$DELIVERY_PROFILE_ID\G
mysql> select * from storage_profile where id=$REMOTE_STORAGE_PROFILE_ID\G

You can get the IDs from the Admin Console or just select all the profiles for your partner and locate the correct one by its name.
Be sure to mask any secrets when posting here.

Also, when making the playManifest request, check /opt/kaltura/log/kaltura_prod.log and /opt/kaltura/log/kaltura_api_v3.log.

In /opt/kaltura/log/kaltura_prod.log, you will find the query that determines which delivery profile should be used, let’s see that it chooses the correct one.

Once you’ve collected this info, we can continue the troubleshooting.


This is the storage profile:

id created_at updated_at partner_id name system_name desciption status protocol storage_url storage_base_dir storage_username storage_password storage_ftp_passive_mode delivery_http_base_url delivery_rmp_base_url delivery_iis_base_url min_file_size max_file_size flavor_params_ids max_concurrent_connections custom_data path_manager_class url_manager_class delivery_priority delivery_status
1 2018-01-09 17:04:12 2018-01-17 17:19:13 101 Amazon S3 2 6 /mmtmd/kaltura XXXXXXXXXXXXXXX xxxxxxxxxxxxxxxxxxxxx 0 NULL NULL NULL 0 0 0,1,2,3,4,5,6,7,8,32,33,34,35,36,37,39,40,41,100,101,102,103,104,105,106,107,108,109 NULL a:15:{s:7:"trigger";i:3;s:14:"ready_behavior";i:0;s:17:"allow_auto_delete";b:0;s:20:"delivery_profile_ids";a:2:{s:4:"http";a:1:{i:0;i:1005;}s:3:"hls";a:1:{i:0;i:1006;}}s:10:"privateKey";s:0:"";s:9:"publicKey";s:0:"";s:10:"passPhrase";s:0:"";s:20:"should_export_thumbs";b:0;s:22:"files_permission_in_s3";s:11:"public-read";s:8:"s3Region";s:0:"";s:7:"sseType";s:4:"None";s:11:"sseKmsKeyId";s:0:"";s:13:"signatureType";s:2:"s3";s:8:"endPoint";s:0:"";s:19:"path_manager_params";s:6:"a:0:{}";} kExternalPathManager 0 1

And this is the delivery Profile:

id partner_id created_at updated_at name type system_name description url host_name recognizer tokenizer status media_protocols streamer_type is_default parent_id custom_data priority
1005 101 2018-01-09 18:20:29 2018-01-17 16:56:18 HTTP CF 4 NULL NULL NULL NULL 0 NULL http 0 0 a:1:{s:13:"rendererClass";s:21:"kM3U8ManifestRenderer";} 0

This is the manifest I get:

Just one file? I was expecting different flavors. Also, passing the manifest to the player is not working either. Finally, the mimeType "video/x-flv" caught my attention; is this OK for HLS?
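For comparison, a healthy adaptive-bitrate master playlist carries one #EXT-X-STREAM-INF entry per flavor, so a count of one (or zero) would explain seeing just a single file. A quick sanity check on any manifest you download (the sample playlist below is inlined for illustration and its URLs are placeholders):

```shell
# Count the variant streams in an HLS master playlist.
cat <<'EOF' | grep -c '#EXT-X-STREAM-INF'
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=500000,RESOLUTION=640x360
http://cdn.example.com/hls/0_vl8ip5zg/index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=900000,RESOLUTION=1280x720
http://cdn.example.com/hls/0_1b8i236f/index.m3u8
EOF
```

Here the count is 2, one per flavor; against a real manifest, a count that does not match the number of ready flavors points at the delivery profile rather than the player.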

In /opt/kaltura/log/kaltura_prod.log I’m getting the following line:

2018-01-17 17:36:37 [0.000410] [] [9618105] [41] [PS2] [DeliveryProfilePeer::getRemoteDeliveryByStorageId] INFO: Delivery ID for storageId [1] ( PartnerId [101] ) and streamer type [http] is 1005

and this in /opt/kaltura/log/kaltura_api_v3.log:

2018-01-17 17:35:10 [0.000620] [] [1721591570] [41] [PS2] [DeliveryProfilePeer::getRemoteDeliveryByStorageId] INFO: Delivery ID for storageId [1] ( PartnerId [101] ) and streamer type [http] is 1005
2018-01-17 17:35:10 [0.000749] [] [1721591570] [42] [PS2] [KExternalErrors::terminateDispatch] DEBUG: Dispatch took - 0.021059036254883 seconds, memory: 7077888
2018-01-17 17:36:37 [0.002446] [] [9618105] [1] [PS2] [sfContext->initialize] INFO: {sfContext} initialization
2018-01-17 17:36:37 [0.000215] [] [9618105] [2] [PS2] [sfController->initialize] INFO: {sfController} initialization
2018-01-17 17:36:37 [0.000212] [] [9618105] [3] [PS2] [sfRouting->parse] INFO: {sfRouting} match route [default] "/:module/:action/*"
2018-01-17 17:36:37 [0.000083] [] [9618105] [4] [PS2] [sfWebRequest->loadParameters] INFO: {sfRequest} request parameters array (  'module' => 'extwidget',  'action' => 'playManifest',  'entryId' => '0_0cmifwch',  'a' => 'a.f4m',)
2018-01-17 17:36:37 [0.000117] [] [9618105] [5] [PS2] [sfFrontWebController->dispatch] INFO: {sfController} dispatch request
2018-01-17 17:36:37 [0.000493] [] [9618105] [6] [PS2] [sfFilterChain->execute] INFO: {sfFilter} executing filter "sfRenderingFilter"