How do you load balance the video packager and live stream?

We have the following setup, which is not working:

  • a load balancer with SSL termination
  • a public front and an internal front
  • a backend for the Apache front
  • a backend for nginx
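
For context, a minimal HAProxy sketch of that intended topology might look like the following. All hostnames, ports and backend names here are placeholders for illustration, not our actual values:

```haproxy
# Sketch of the intended topology (placeholder names, not our real config)
frontend public
    bind :443 ssl crt /etc/haproxy/certs/wildcard.pem   # SSL terminates here
    # route packager/VOD paths to nginx, everything else to the Apache front
    acl is_vod path_beg /hls /dash /hlsme /dashme /vod_status /nginx_status
    use_backend nginx_vod if is_vod
    default_backend apache_front

backend apache_front
    server front1 front1.internal:80 check

backend nginx_vod
    server vod1 vod1.internal:88 check
```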

We got to the point where everything works except the most important part: video playback. Videos can be uploaded and converted, and you can play the flavour previews. However, when you try to watch a video in the player, it does not play. It seems to be a CORS issue.

For that, we have tried several things:

  • CORS settings in HAProxy, Apache and nginx, individually and collectively.
  • Terminating SSL at Apache and nginx instead of the LB; that did not work either.
  • Putting nginx on the front host. That did not work either.
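
For reference, the kind of CORS block we tried on the nginx side looked roughly like this (a generic sketch, not our exact config; the header values are illustrative):

```nginx
location /hls {
    # Allow the player origin to fetch manifests and segments cross-origin
    add_header Access-Control-Allow-Origin * always;
    add_header Access-Control-Allow-Methods "GET, HEAD, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Origin, Range, Accept-Encoding, Referer" always;

    # Answer CORS preflight requests directly
    if ($request_method = OPTIONS) {
        return 204;
    }
}
```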

It looks like it boils down to SSL issues between either nginx and Apache, or nginx and HAProxy. Either way, we clearly see a problem that we do not understand:

Setting up VOD_PACKAGER_HOST and

I understand this sets the machine that does the ‘heavy lifting’ for the transcoding and live streaming of video.

That is great, but when a video is played, the stream seems to go from the front to that host over the Internet (it is not an internal host-to-host request, but a public one).

So if you upload a video on a Kaltura front, it works and transcodes, but then when playing it, the player calls that host.

Now, let's say we have:

front1 and front2 in and 11 respectively.
vod1 and vod2 ( or whatever) in 20 and 21.

If the public hostname , in the LB is:

And if you set up internally a VIP or, to make it easier, vod1's internal IP as the VOD_PACKAGER_HOST, this will eventually not be good: when you play a video inside the player it is going to go, for example, to vod1:8443, and of course that is not going to fly, as the request will not reach it.

Now if you make the nginx host reachable on the Internet, for example through the same LB, you can send it to a different back-end and it should work, right?

It kind of does, if you try something ‘static’ like nginx_status or vod_status. However, when the front calls a video on the VOD from the player frame, it does not load; it falls into the CORS pit, and we see different messages in the LB, Apache and nginx (I will put them below).

So in the LB we see a 200 (the frame loads), in Apache a big error (the resource does not load), and in nginx an SSL handshake error to the upstream.

Something else intriguing: it is NOT 100% clear to us what calls VOD_PACKAGER_HOST and where it is configured from.

The ANS file, when applied, seems to be the one that sets local.ini in the configurations directory. However, once set, changing it does not change anything in the database (I would expect that to be easily changeable).

Instead, even after making the corresponding changes in the ANS files (for posterity) and in the local.ini where the VOD string is set, the only way to make an effective change in how the application behaves (what it uses when hitting play on a video in the player) seems to be to DIRECTLY edit the value in the DB, in the delivery_profile table.
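
For anyone else digging into this, a sketch of how that can be inspected. The column names and the example values here are assumptions from my side (schemas differ between Kaltura versions), so verify against your own schema before running anything:

```sql
-- Inspect which base URL each delivery profile uses (verify column names first)
SELECT id, name, type, url
FROM kaltura.delivery_profile;

-- What effectively changed the player behaviour for us: pointing the profile
-- at the reachable packager host (hypothetical value; use your own host)
UPDATE kaltura.delivery_profile
SET url = 'https://vod1.example.com:8443'
WHERE id = 1001;
```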

The output we get right now:

The LB:

Oct 1 16:31:26 localhost haproxy[111550]: [01/Oct/2021:16:31:26.595] public~ dynamic/dynsrv1 0/0/0/185/185 200 913 - - --VN 1/1/0/0/0 0/0 {} {} "GET /p/102/sp/10200/playManifest/entryId/0_56yy00s5/flavorIds/0_swut18ka,0_f6tewc72,0_xlyn2tmi,0_wt1bf91a,0_bk0k2xk8/format/mpegdash/protocol/https/a.mpd?referrer=aHR0cHM6Ly9tZWRpYS5mZXRmaWxtcy5jb20=&ks=MTk1YzhjMDk4ZDM5NjhjNDQzNWFhNDk1YTM4MDZkZmJkMmJkYmQ4ZHwxMDI7MTAyOzE2MzMxMzAwOTA7MjsxNjMzMDQzNjkwLjU4MjtndWlsbGVtLmxpYXJ0ZUBhbWMubHU7ZGlzYWJsZWVudGl0bGVtZW50LGFwcGlkOmttYzs7&playSessionId=33b7953b-dc95-efb0-2717-09b1236f3823&clientTag=html5:v2.85&uiConfId=23448173&responseFormat=jsonp&callback=jQuery11110458314859665804_1633098649674&_=1633098649675 HTTP/1.1"

Play request is sent to dynamic ( the front running Apache). - - [01/Oct/2021:16:35:13 +0200] “GET /p/102/sp/10200/playManifest/entryId/0_56yy00s5/flavorIds/0_swut18ka,0_f6tewc72,0_xlyn2tmi,0_wt1bf91a,0_bk0k2xk8/format/mpegdash/protocol/https/a.mpd?referrer=aHR0cHM6Ly9tZWRpYS5mZXRmaWxtcy5jb20=&ks=MTk1YzhjMDk4ZDM5NjhjNDQzNWFhNDk1YTM4MDZkZmJkMmJkYmQ4ZHwxMDI7MTAyOzE2MzMxMzAwOTA7MjsxNjMzMDQzNjkwLjU4MjtndWlsbGVtLmxpYXJ0ZUBhbWMubHU7ZGlzYWJsZWVudGl0bGVtZW50LGFwcGlkOmttYzs7&playSessionId=542b3a22-9583-143a-ba46-46cca1f8b5ad&clientTag=html5:v2.85&uiConfId=23448173&responseFormat=jsonp&callback=jQuery1111025388460390376655_1633098903176&=1633098903177 HTTP/1.1" 200 915 0/177076 “ - xxx sex videos free hd porn Resources and Information.” “Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36” “-” “-” “” 30193 1105483492, 1633098913 - 1339 “-” “” “-” “-” “no-store, no-cache, must-revalidate, post-check=0, pre-check=0” -
[errorMessage] => pid : 102 | uiconfId : 23448173 | referrer : | didSeek : false | resourceUrl :,0_f6tewc72,0_xlyn2tmi,0_wt1bf91a,0_bk0k2xk8/format/mpegdash/protocol/https/a.mpd?referrer=aHR0cHM6Ly9tZWRpYS5mZXRmaWxtcy5jb20=&ks=MTk1YzhjMDk4ZDM5NjhjNDQzNWFhNDk1YTM4MDZkZmJkMmJkYmQ4ZHwxMDI7MTAyOzE2MzMxMzAwOTA7MjsxNjMzMDQzNjkwLjU4MjtndWlsbGVtLmxpYXJ0ZUBhbWMubHU7ZGlzYWJsZWVudGl0bGVtZW50LGFwcGlkOmttYzs7&playSessionId=542b3a22-9583-143a-ba46-46cca1f8b5ad&clientTag=html5:v2.85&uiConfId=23448173 | userAgent : Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36 | playerCurrentTime : 0 | playerLib : Native | streamerType : mpegdash | message : {“severity”:2,“category”:1,“code”:1002,“data”:[" - xxx sex videos free hd porn Resources and Information.
,swut18ka,f6tewc72,xlyn2tmi,wt1bf91a,bk0k2xk8,/forceproxy/true/name/a.mp4.urlset/manifest.mpd”],“handled”:false} | code : 1000 | key : 1000 |
[1] => pid : 102 | uiconfId : 23448173 | referrer : | didSeek : false | resourceUrl : - xxx sex videos free hd porn Resources and Information. | userAgent : Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36 | playerCurrentTime : 0 | playerLib : Native | streamerType : mpegdash | message : {“severity”:2,“category”:1,“code”:1002,“data”:[“,swut18ka,f6tewc72,xlyn2tmi,wt1bf91a,bk0k2xk8,/forceproxy/true/name/a.mp4.urlset/manifest.mpd"],"handled”:false} | code : 1000 | key : 1000

The front gets the request from the LB and tries to send it to the VOD host (publicly, instead of internally (??)), and fails.

2021/10/01 16:40:32 [error] 21648#21648: *36 SSL_do_handshake() failed (SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol) while SSL handshaking to upstream, client:, server: _, request: “GET /hls/p/102/sp/10200/serveFlavor/entryId/0_56yy00s5/v/2/ev/6/flavorId/0_wt1bf91a/name/a.mp4/index.m3u8 HTTP/1.1”, subrequest: “/kalapi_proxy/hls/p/102/sp/10200/serveFlavor/entryId/0_56yy00s5/v/2/ev/6/flavorId/0_wt1bf91a/name/a.mp4”, upstream: “”, host: “”, referrer: “
2021/10/01 16:40:32 [error] 21648#21648: *36 open() “/etc/nginx/html/50x.html” failed (2: No such file or directory), client:, server: _, request: “GET /hls/p/102/sp/10200/serveFlavor/entryId/0_56yy00s5/v/2/ev/6/flavorId/0_wt1bf91a/name/a.mp4/index.m3u8 HTTP/1.1”, host: “”, referrer: “
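
That "SSL23_GET_SERVER_HELLO:unknown protocol" error generally means nginx is speaking TLS to an upstream that answers in plain HTTP (or the reverse). If that is what is happening here, the fix would be to match the scheme of the proxy_pass to what the upstream port actually serves, roughly like this (host names are placeholders):

```nginx
# If the upstream API listens on plain HTTP, do not wrap the call in TLS:
location /kalapi_proxy/ {
    proxy_pass http://front1.internal:80/;
}

# If the upstream really is HTTPS, make sure SNI is sent so the vhost
# and certificate match:
# location /kalapi_proxy/ {
#     proxy_pass https://front1.internal:443/;
#     proxy_ssl_server_name on;
#     proxy_ssl_name front1.internal;
# }
```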

So, when clicking the play button in the player (which renders perfectly), the LB, being the one that handles the response to a valid request (both media and VOD use the same entry point), sends it to the nginx VOD, but it fails with an SSL issue.

We tried nginx with and without SSL configured, with SSL terminating in the LB. The same wildcard certificate is used; it works perfectly on the Apache front, but not on the VOD.

As general questions I would ask:

Where is the documentation for all this?

Can someone give us a hand with some input or documentation?


Looking at this:

It looks like the player correctly calls the VOD server, but then, on the way back, the VOD tries to talk to the internal load balancer IP (?). Why is it doing that? And WHY on port 80???

So, it looks like it is mixing ports and networks.

I can get without issues to nginx_status and vod_status.

I can see the previews for video flavours.

The problem is when playing from within the player.

I have the exact same set-up, all-in-one, working perfectly. Can someone help me?

Hi @guillem_liarte ,

Please go over this thread Install kaltura Nginx VOD module - #24 by jess.
It includes all the relevant configuration files with annotations.
After reading this, should you have additional questions, please post them here, along with the configuration blocks noted in the thread above and we’ll help you further.

@jess Thank you, I have been there before, but I will give it another good read.

I know my streaming works, as I can stream to the VOD host and see the stream with VLC. However, the KMC player or live stream preview from the front does not seem to work correctly:

We have managed to get VOD playing from KMC and from the API, but NOT live streaming. What configuration settings should I provide to get help?

I am definitely going to have another look at that document you wrote @jess

Thank you for the reply it is most appreciated :slight_smile:

@jess I checked our config against the one in that article.

In the essentials, it is correct and consistent.

Just to make it clear, with current configuration:

  • Everything works except playing live streams.
  • VOD does not work through the LB; it only works over the Internet directly.
  • I have to put an entry in the VOD host's hosts file to let it know about the front ( ) pointing back at the internal LB VIP.

That makes playing videos possible, but playing live streams still does not work, while the streams are created and can be played by other tools.

My suspicion is that, once more, this will be linked to names and the way things work when NOT in an AIO setup, for which, yes, the documentation is scarce and scattered.

@guillem_liarte ,

From my experience, if you want to run a clustered environment, you need to go beyond the configuration in order to understand how things work.

The first thing to do is to understand that each production node (VOD, live, API, etc.) needs to be consistent.

Let me give you an example:

  • You have a VOD packager. Nginx needs to call the Kaltura API in order to get the manifest, and it pulls the files from common storage. Make sure that each instance calls the same server/API, or that the information is consistent.

  • If you run a live stream, you usually send your RTMP stream to one node. If you then call the same stream from another node, it won't work, because the transcoding is happening elsewhere.
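
As a sketch of what I mean by consistent (the names here are examples only): every packager node should resolve and call the API the same way, e.g.:

```nginx
# On every VOD node: the API proxy points at the same, reachable API entry point
upstream kalapi {
    server front1.internal:80;   # example internal name; identical on all nodes
}
```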

You can easily replicate your origin RTMP to all LIVE nodes if that is what you want, and it will probably work.

The configuration instructions are correct, in my opinion.

Let me give you two hints:

  • To make all your nodes more resilient, use /etc/hosts and hard-wire your host names. This way each node will find everything without needing to make a DNS call.
  • Use a CDN to handle the traffic. Otherwise, your cluster will not do that much, even if it’s LB.
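
For example (illustrative addresses and names only), every node would carry the same entries:

```
# /etc/hosts on every node (example addresses, not real ones)
10.0.0.10   front1.internal front1
10.0.0.11   front2.internal front2
10.0.0.20   vod1.internal   vod1
10.0.0.21   vod2.internal   vod2
```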

I hope this helps,


@david.eusse Thanks for your input.

We are already doing that (using hosts files for internal identification of each member, as well as VIPs where appropriate), and yes, we have a CDN ready to be activated when we go to production.

The instructions for clustering alone do not produce a working cluster, as they assume a series of factors that do not necessarily hold, like how the nodes know each other, or an actual complete set of services that need to be running.

The VOD part works only after some adjustments, and SSL will almost never work on the first go; it needs extra lines for some certs, like Let's Encrypt certs (which you may want to use until you are fully on board in production and switch to a commercial one).

Regarding the common storage:

We are mounting the storage from an existing NFS export of a GlusterFS volume, and we are mounting it at /opt/kaltura/web.

All hosts have: 7514105856 136015048 7378090808 2% /opt/kaltura/web

BTW is for the storage is for the applications

The nodes that are publicly exposed are the LBs and, for now, the only VOD node that I am trying to make work, but of course there are others. We want to hide those behind the LBs if possible.

The instructions for clustering fall short in that respect, in the sense that creating an ANS file and running it for each node will not deliver a working solution.

I do agree that one must dive into the config and learn it. I am all for it, and that is why I am here and have not bought an out-of-the-box alternative.

For example, in my case, videos only started to play after this was changed:


The rationale behind why this is needed in our case still beats me.

In any case, David, thanks for your reply. If there is any way you think you could help me further, I am open to suggestions, and likewise, if there is anything I can do, let me know.

For the moment I am running the cluster with only one API node and one VOD (and live) node, so no effective redundancy is at play. Furthermore, the only way the VOD works is by being called outside the LB, over the Internet.

That setup works for uploaded videos, but it does not work for live videos (a live video works when the same stream is opened in VLC, for example, but not when called from within KMC or over the API).

And yes, we use port 1935 for live, and streams can be accessed in VLC via:

But the same is NOT found in KMC (see screenshot).

So, yes, I get what you are saying, and yes, we apply it (or have it ready).

So far the issues are:

  • VOD through the load balancer is not working; it only works when called over the public host IP.
  • Live streams do not work from KMC, but they exist and work if used with VLC or similar.

Hopefully this sheds more light on what we are experiencing and someone can help point me in the right direction.

Hello @jess and @david.eusse, and thanks for your replies, and for all the patience you have to read through my issues. Any help is very appreciated.

This is what is open on one of my LBs (the active one; we use keepalived with VIPs between two HAProxies):

tcp        0      0 *               LISTEN      359324/haproxy      
tcp        0      0*               LISTEN      359324/haproxy      
tcp        0      0   *               LISTEN      359324/haproxy      
tcp        0      0*               LISTEN      359324/haproxy      
tcp        0      0   *               LISTEN      359324/haproxy      
tcp        0      0 *               LISTEN      359324/haproxy      
tcp        0      0  *               LISTEN      359324/haproxy      
tcp        0      0*               LISTEN      359324/haproxy      
tcp        0      0*               LISTEN      359324/haproxy      
tcp        0      0*               LISTEN      359324/haproxy 

This is the relevant part of /etc/hosts on the LBs:

front1
vod1

In front1 ( the API, dynamic content )

the /etc/hosts must be like this in order for KMC to even load:

--> if I put the LB IP here, KMC does NOT load.
--> 7 is the LB where I am testing on.

in vod1:

--> this needs to be set for videos to get to play

Part of all this is because the HOSTNAMES of the hosts are not front1, front2, vod1, vod2 but other names that follow a datacentre nomenclature.

So, if I have things like default in Apache or _ in nginx, most things like SSL certs do not work at all, as the name does not match.

And you may ask: WHY do you have a cert internally IF you terminate SSL in the LB? Because of this issue:

Enabling SSL on the front allows us to log in, as port 443 then matches what the application puts in the hostname part of the request; for some reason, instead of just hostname, hostname:port is used. As seen in the ANS file:

# host and port

So right now, I see NO actual problems in the logs. If I load KMC and go to view the stream, it gives no errors, but it tells me:

“Currently not broadcasting”


While at the same time I have a camera broadcasting this from OBS, and I am watching it in VLC


That is, by playing:

Try using your browser dev tools and check the manifest URLs.

Fix the virtual host names in nginx if necessary. That might help.


Hello @david.eusse and THANK you for the reply.

I use the dev tools to debug all this of course :slight_smile:

As I explained, there are no visible errors in the network or console tabs that I can identify in the browser. I am trying to understand what you mean by "manifest URLs": from what I see in my browser (I am looking at Firefox right now, but I also checked in Chrome), in terms of manifests it only shows me this:

I am quite sure this is NOT what you mean.

I see for example that while that page is open, it is trying to open the stream:

But nothing is displayed, sadly.

In the meantime, the stream is there and showing in VLC.


Video uploaded: YES
Live Stream: NO

And they are both in theory coming from the same place.

Of course this is a misconfiguration somewhere on our side. But I really cannot see what is wrong.

Check the downloaded files in the "Network" tab.

You can see exactly what the player is calling.

Yes precisely that:


But I do not see anything specific to the live stream '1ab1ac00-8067-4a43-905f-630b4919ef10'.

That, in principle, would be fine for me, as it would obfuscate the real file name.

The problem is that it is indistinguishable from a non-working stream.

Should I conclude that this setup is not able to find the files pertaining to the streams?

In nginx.conf, the locations are:

                dash_path /opt/kaltura/web/content/tmp/dashme/;
                hls_path /opt/kaltura/web/content/tmp/hlsme/;

Then in kaltura.conf:

```
                location /dashme {
                        open_file_cache off;
                        root /opt/kaltura/web/content/tmp/;
                        add_header Cache-Control no-cache;
                        # To avoid issues with cross-domain HTTP requests (e.g. during development)
                        add_header Access-Control-Allow-Origin *;
                }

                location /hlsme {
                        open_file_cache off;
                        types {
                                application/vnd.apple.mpegurl m3u8;
                        }
                        root /opt/kaltura/web/content/tmp/;
                        add_header Cache-Control no-cache; # Prevent caching of HLS fragments
                        add_header Access-Control-Allow-Origin *; # Allow web player to access our playlist
                }
```
I do see the files being created there, and accessing the stream with VLC via:

works perfectly.

The ONLY problem we have is when the stream is invoked from KMC or its API.

The files are created of course, and are available:

[root@ndoamsel120 hlsme]# pwd
[root@ndoamsel120 hlsme]# ls -l *630b4919ef10* | tail -n 10
-rw-r--r-- 1 kaltura kaltura  113364 Oct  6 20:34 1ab1ac00-8067-4a43-905f-630b4919ef10_src-26.ts
-rw-r--r-- 1 kaltura kaltura  843556 Oct  6 20:27 1ab1ac00-8067-4a43-905f-630b4919ef10_src-2.ts
-rw-r--r-- 1 kaltura kaltura  454020 Oct  6 20:28 1ab1ac00-8067-4a43-905f-630b4919ef10_src-3.ts
-rw-r--r-- 1 kaltura kaltura  200032 Oct  6 20:28 1ab1ac00-8067-4a43-905f-630b4919ef10_src-4.ts
-rw-r--r-- 1 kaltura kaltura  409652 Oct  6 20:28 1ab1ac00-8067-4a43-905f-630b4919ef10_src-5.ts
-rw-r--r-- 1 kaltura kaltura  311328 Oct  6 20:28 1ab1ac00-8067-4a43-905f-630b4919ef10_src-6.ts
-rw-r--r-- 1 kaltura kaltura  212064 Oct  6 20:28 1ab1ac00-8067-4a43-905f-630b4919ef10_src-7.ts
-rw-r--r-- 1 kaltura kaltura  215448 Oct  6 20:28 1ab1ac00-8067-4a43-905f-630b4919ef10_src-8.ts
-rw-r--r-- 1 kaltura kaltura  248724 Oct  6 20:28 1ab1ac00-8067-4a43-905f-630b4919ef10_src-9.ts
-rw-r--r-- 1 kaltura kaltura    1696 Oct  6 20:34 1ab1ac00-8067-4a43-905f-630b4919ef10_src.m3u8

Is there any specific part I must change anywhere, like in the API config on the front? It does not look like that should be the case.

@jess do you have any advice for us?

@david.eusse again, thanks for your suggestions.