We are having challenges with the amount of time it takes to transcode content. We are not maxing out CPU or RAM on the EC2 instances hosting Kaltura. We would like suggestions on how to speed up transcoding in Kaltura, or a way to load already-transcoded content. Any help would be greatly appreciated.
Hi @steve_corey,
See my response here: Ffmpeg cpu utilization - #2 by jess
Please note that it references Chef recipes which are no longer maintained but you could easily apply the same principles in your own deployment system (be it Chef, Ansible or something else).
You can also create a conversion profile that only includes the source flavour and ingest your media sources using that one (either via the bulk upload mechanism or by invoking the needed API calls directly; see https://developer.kaltura.com/workflows/Ingest_and_Upload_Media/Uploading_Media_Files). When invoking media.add() [step 3 in that workflow], you just need to pass conversionProfileId=$YOUR_SOURCE_ONLY_PROFILE.
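For reference, here is a minimal sketch of that flow using the Kaltura Python client (the KalturaApiClient package); SERVICE_URL, ADMIN_SECRET, PARTNER_ID, SOURCE_ONLY_PROFILE_ID and the file path are placeholders for your own values:

from KalturaClient import KalturaClient, KalturaConfiguration
from KalturaClient.Plugins.Core import (
    KalturaMediaEntry, KalturaMediaType, KalturaSessionType,
    KalturaUploadToken, KalturaUploadedFileTokenResource)

SERVICE_URL = "https://your-kaltura-host"  # placeholder
ADMIN_SECRET = "your-admin-secret"         # placeholder
PARTNER_ID = 101                           # placeholder
SOURCE_ONLY_PROFILE_ID = 0                 # placeholder: your source-only profile's id

config = KalturaConfiguration()
config.serviceUrl = SERVICE_URL
client = KalturaClient(config)
ks = client.session.start(ADMIN_SECRET, "admin", KalturaSessionType.ADMIN, PARTNER_ID)
client.setKs(ks)

# upload the source file
token = client.uploadToken.add(KalturaUploadToken())
client.uploadToken.upload(token.id, open("/path/to/source.mp4", "rb"))

# create the entry with the source-only conversion profile, so no transcoding is triggered
entry = KalturaMediaEntry()
entry.name = "source-only ingest"
entry.mediaType = KalturaMediaType.VIDEO
entry.conversionProfileId = SOURCE_ONLY_PROFILE_ID
entry = client.media.add(entry)

# attach the uploaded file to the entry
resource = KalturaUploadedFileTokenResource()
resource.token = token.id
entry = client.media.addContent(entry.id, resource)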
Then, you can add whatever flavours you'd like [if you have existing renditions that match the current flavour specifications], or you could create custom flavours based on your currently available renditions. This can all be done using the flavorparams and flavorasset services. Please note that this requires some transcoding knowledge, however.
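Building on the previous sketch (same client and imports), attaching an existing rendition as a flavour could look roughly like this; TARGET_FLAVOR_PARAMS_ID is a placeholder for flavour params that actually describe your rendition:

from KalturaClient.Plugins.Core import KalturaFlavorAsset

TARGET_FLAVOR_PARAMS_ID = 0  # placeholder: must match the rendition's codec/resolution/bitrate

# declare a new flavour on the entry
flavor = KalturaFlavorAsset()
flavor.flavorParamsId = TARGET_FLAVOR_PARAMS_ID
flavor = client.flavorAsset.add(entry.id, flavor)

# upload the pre-transcoded rendition and set it as the flavour's content
token = client.uploadToken.add(KalturaUploadToken())
client.uploadToken.upload(token.id, open("/path/to/rendition_720p.mp4", "rb"))
resource = KalturaUploadedFileTokenResource()
resource.token = token.id
client.flavorAsset.setContent(flavor.id, resource)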
Jess,
Hope you are well.
I appreciate your help so much. We are going to evaluate the approaches but this is just the direction we needed.
All the best,
Steve Corey
Hi Steve,
I’m doing fine, thanks. I trust you are too?
And you’re welcome. Glad it helps.
Jess,
The developer working on this has some questions I was wondering if you could help with:
1. Do we need to back up only the batch.ini file, or the whole directory of configurations (in case we need to revert)?
2. Are there any dependencies on the batch.ini file which we should look out for after the changes are configured?
3. In the batch.ini file, below are the various entries regarding enabledWorkers.KAsyncConvert. Should we also make any changes to these?
enabledWorkers.KAsyncConvert = 1
enabledWorkers.KAsyncConvertLiveSegment = 1
enabledWorkers.KAsyncConvertThumbAssetsGenerator = 1
enabledWorkers.KAsyncConvertCloser = 1
enabledWorkers.KAsyncConvertProfileCloser = 1
4. Please help us understand the value to which we should increase enabledWorkers.KAsyncConvert (currently = 1).
5. Should we also increase server resources like CPU and memory?
6. How can we test whether the changes were successful?
Jess,
Any help would be greatly appreciated. We would like to make the changes as soon as we can but want to be more certain about how to proceed. Sorry for the detailed questions, but we really need the help.
Steve
Hello,
Let me give my five cents of advice. We set up a relatively large Kaltura cluster hosting around 120,000 videos. My customer uploaded about 6-10 hours of video content every day.
To cope with that amount of content, we set up two bare-metal servers (8 cores each) and managed to get the transcoding time down to about half of the total play time.
We also implemented an on-premises mezzanine server that was very useful to us.
Users only needed to copy their files into their corresponding folder, and a shell inotify script created the related XML and pre-transcoded the file to a constant 1080p MP4, ready to be uploaded to a Kaltura dropfolder.
From my experience, you need CPU + I/O for ffmpeg, but not that much memory.
This setup worked really well and helped us save on bandwidth and hosting CPU.
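For what it's worth, a rough Python equivalent of that watch-and-transcode loop might look like the sketch below (the paths and ffmpeg settings are illustrative, not the exact script, and the XML generation for the dropfolder is omitted):

import subprocess, time
from pathlib import Path

WATCH_DIR = Path("/srv/mezzanine/incoming")  # where users drop their files (placeholder)
READY_DIR = Path("/srv/mezzanine/ready")     # synced to the Kaltura dropfolder (placeholder)

seen = set()
while True:
    for src in sorted(WATCH_DIR.glob("*")):
        if src in seen or not src.is_file():
            continue
        seen.add(src)
        dst = READY_DIR / (src.stem + "_1080p.mp4")
        # pre-transcode to a constant 1080p H.264/AAC mezzanine
        subprocess.run([
            "ffmpeg", "-y", "-i", str(src),
            "-vf", "scale=-2:1080",
            "-c:v", "libx264", "-preset", "medium", "-crf", "18",
            "-c:a", "aac", "-b:a", "192k",
            str(dst)], check=True)
    time.sleep(10)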
Regards,
David
David,
Thank you. We are loading 2,000 videos a day and have two high-powered Kaltura servers on AWS. We really need to get more queues going, or we need to do the transcoding outside of Kaltura using AWS transcoding services and the Kaltura Bulk Uploader. The problem I have is my developers' limited experience implementing new features in Kaltura.
Steve
Steve,
What source quality are you using right now? The main problem is not the number of videos but the total play time that adds up for transcoding.
Maybe implementing a mezzanine will help you a lot, since you can queue your files locally and upload them already close to optimized.
I have worked with broadcasters and a mezzanine server is absolutely essential for them.
David
Hi @steve_corey,
1. Do we need to back up only the batch.ini file, or the whole directory of configurations (in case we need to revert)?
Generally speaking, it's always good to back up all the configuration files. They are relatively small (mostly INI files, though the Apache and Sphinx configs are stored there as well). For example, something like tar -czf kaltura-config-backup.tar.gz /opt/kaltura/app/configurations before editing gives you an easy way to revert.
In terms of this particular change (increasing the number of conversion workers), only batch.ini should be modified.
2. Are there any dependencies on the batch.ini file which we should look out for after the changes are configured?
Not in this particular case.
3. In the batch.ini file, below are the various entries regarding enabledWorkers.KAsyncConvert. Should we also make any changes to these?
enabledWorkers.KAsyncConvert = 1
enabledWorkers.KAsyncConvertLiveSegment = 1
enabledWorkers.KAsyncConvertThumbAssetsGenerator = 1
enabledWorkers.KAsyncConvertCloser = 1
enabledWorkers.KAsyncConvertProfileCloser = 1
You can increase the number of workers for any given task (while taking into consideration the amount of hardware resources available to you), but the heavy operation here is the conversion itself (KAsyncConvert).
4. Please help us understand the value to which we should increase enabledWorkers.KAsyncConvert (currently = 1).
Once again, that depends on the hardware resources at hand. Basically, this directive determines the number of conversion workers the system will launch concurrently (that is, at any given time).
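For example, on a machine with idle cores you might try something like the following in batch.ini (the value 4 is purely illustrative; tune it against the CPU headroom you actually observe), then restart the batch service so it takes effect:
enabledWorkers.KAsyncConvert = 4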
5. Should we also increase server resources like CPU and memory?
The more CPU resources available, the more transcoding jobs you’ll be able to run concurrently.
6. How can we test whether the changes were successful?
Ingest several new sources concurrently and monitor the launched batch processes and their respective logs (/opt/kaltura/log/batch/convert*$TIMESTAMP*log); for example, something like tail -f /opt/kaltura/log/batch/convert* while the jobs run.
Cheers,
Once again, you are the best. Thank you so much!