I initiated the upgrade of my single server, which runs Apache without SSL (because I use an SSL-offloading proxy in front of it), but the /opt/kaltura/bin/kaltura-config-all.sh script no longer asks whether I want my Apache vhost to use SSL.
It immediately asks me for the certificate files:
Please input path to your SSL certificate[/etc/ssl/certs/localhost.crt]:
Please input path to your SSL key[/etc/pki/tls/private/localhost.key]:
Please input path to your SSL CA file or leave empty in case you have none:
Now my installation is broken.
Is there any chance to fix this?
I tried using the .ans file with the option IS_SSL="false" and got:
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
"acknowledged" : true
Redirecting to /bin/systemctl restart httpd.service
Redirecting to /bin/systemctl restart memcached.service
Stopping kaltura-elastic-populate (via systemctl): [ OK ]
Starting kaltura-elastic-populate (via systemctl): Job for kaltura-elastic-populate.service failed because the control process exited with error code. See "systemctl status kaltura-elastic-populate.service" and "journalctl -xe" for details.
kaltura-config-all.sh FAILED with: 28606 on line 72
Archving logs to /opt/kaltura/log/log_04_09_20_13_30.tar.gz...
The output of systemctl status kaltura-elastic-populate.service is:
[root@ouivid conf.d]# systemctl status kaltura-elastic-populate.service
● kaltura-elastic-populate.service - LSB: Control the Kaltura elastic populate daemon
Loaded: loaded (/etc/rc.d/init.d/kaltura-elastic-populate; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since ven. 2020-09-04 13:30:46 CEST; 5min ago
Process: 31921 ExecStart=/etc/rc.d/init.d/kaltura-elastic-populate start (code=exited, status=1/FAILURE)
sept. 04 13:30:44 ouivid systemd: Starting LSB: Control the Kaltura elastic populate daemon...
sept. 04 13:30:44 ouivid kaltura-elastic-populate: ElasticSearch-Populate est mort mais le fichier pid existe
sept. 04 13:30:44 ouivid su: (to kaltura) root on none
sept. 04 13:30:46 ouivid kaltura-elastic-populate: ElasticSearch-Populate est mort mais le fichier pid existe
sept. 04 13:30:46 ouivid systemd: kaltura-elastic-populate.service: control process exited, code=exited status=1
sept. 04 13:30:46 ouivid systemd: Failed to start LSB: Control the Kaltura elastic populate daemon.
sept. 04 13:30:46 ouivid systemd: Unit kaltura-elastic-populate.service entered failed state.
sept. 04 13:30:46 ouivid systemd: kaltura-elastic-populate.service failed.
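The French log line translates to “ElasticSearch-Populate is dead but the pid file exists”, which usually means a stale pid file was left behind after a crash. A possible recovery sketch (the pid-file path below is an assumption; check the init script /etc/rc.d/init.d/kaltura-elastic-populate for the actual location it uses):

```shell
# Stop the unit, remove the stale pid file, then try starting again.
# /var/run/kaltura-elastic-populate.pid is a hypothetical path; the real
# path is defined inside /etc/rc.d/init.d/kaltura-elastic-populate.
systemctl stop kaltura-elastic-populate.service
rm -f /var/run/kaltura-elastic-populate.pid
systemctl start kaltura-elastic-populate.service
systemctl status kaltura-elastic-populate.service
```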
Should I wait for the archive and send it to you?
If the “bailing out!” error occurs, the data-warehouse database setup is incomplete.
Please check the error message at the tail of “/opt/kaltura/dwh/setup/installation log.log”.
If the error message is “Too many partitions (including subpartitions) were defined”, the following topic may be helpful.
I also use CentOS 7 on VirtualBox.
On CentOS 7, the “elasticsearch” service does not work, so I configured Kaltura CE 16.5.0 without it.
I also faced an issue with the mariadb service.
The “kaltura-db-config.sh” script created the database users “kaltura@%” and “etl@%”, but access from localhost was denied.
So I modified “kaltura-db-config.sh” to create the users “kaltura@localhost” and “etl@localhost” instead.
In cluster environments, “kaltura@%” and “etl@%” should be added after the initial configuration.
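For reference, creating the localhost-scoped users manually could look like the sketch below. The passwords and the exact grant set are assumptions; mirror whatever grants kaltura-db-config.sh issues for the “%” accounts on your version:

```shell
# Create 'kaltura'@'localhost' and 'etl'@'localhost' with hypothetical
# passwords; adjust the database patterns and privileges to match the
# grants your kaltura-db-config.sh applies to the '%' users.
mysql -uroot -p <<'SQL'
CREATE USER 'kaltura'@'localhost' IDENTIFIED BY 'your_kaltura_password';
CREATE USER 'etl'@'localhost' IDENTIFIED BY 'your_etl_password';
GRANT ALL PRIVILEGES ON `kaltura%`.* TO 'kaltura'@'localhost';
GRANT ALL PRIVILEGES ON `kalturadw%`.* TO 'etl'@'localhost';
FLUSH PRIVILEGES;
SQL
```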
Thank you for your answer.
DWH analytics problem (ERROR 1499 (HY000)) Backend Server Support
I am aware of the above and will refer to it.
(The problem is that there are a lot of files to change, so I would have to write a script or a program.)
I wrote before about problems 1, 2, 3, 4, and 5, but problems 3, 4, and 5 have been solved.
(By registering to Add New Publisher in Kaltura Admin Console.)
As for elastic, I think there are two services, elasticsearch and kaltura-elastic-populate, but I don’t know whether either of them is working.
What do you think will be affected if the services are not running?
I’ve been having problems with localhost, so I’m trying to set the hostname correctly and register it in DNS.
The problem now is stream delivery.
For the URL I created in KMC, 37 people could see the video and 12 people could not.
I’m thinking that the video is not being streamed.
Do you know if I need red5 or something else for stream delivery?
(red5 doesn’t seem to be running.)
Before executing “kaltura-config-all.sh”, I modified the SQL files manually.
It was very tiring.
I guess that if the “elasticsearch” and “kaltura-elastic-populate” services are not running, your system cannot use the elastic-search APIs.
Also, the sphinx search engine cannot create indexes safely, because your system cannot create the indexes needed for elastic-search.
I could not solve the problem with “kaltura-es-config.sh” and the “elasticsearch” service on CentOS 7.
Therefore, I modified “kaltura-config-all.sh”, “/opt/kaltura/app/configurations/elastic.ini.template”, and “/opt/kaltura/app/configurations/base.ini” in order to skip the configuration of “elasticsearch”.
As a result, the “elasticsearch” and “kaltura-elastic-populate” services do not run on my system, and the sphinx search engine does not use elastic-search indexes.
I don’t know about the red5 problem.
My system does not use live streaming and does not use red5.
Thank you very much, I understand that Red5 is not necessary.
How exactly can I do HTTP streaming of an uploaded video?
I think I need a transcoding profile when I select the file under “CREATE” > “Upload from Desktop” and choose “Transcoding profile”. Am I wrong?
(I don’t think it’s “CREATE” > “Create Live Stream Entry”.)
If it is HLS, I think the encoding will produce .m3u8 and .ts files.
The upload method is independent of the delivery format, so you do not need a special profile when uploading media.
Similarly, there is no need to create a .m3u8 file yourself.
If users access the video directly via its URL, you can set the “format” parameter in the URL to “applehttp”.
Please see the following web page:
With the above method, the web browser’s native HTML5 player or a media player application on the user’s device plays the video.
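As a concrete sketch of the direct-URL approach, Kaltura’s playManifest service returns an HLS master playlist when format is set to applehttp. The server name, partner ID, and entry ID below are hypothetical placeholders; substitute your own:

```shell
# Hypothetical values - replace with your real server, partner ID and entry ID.
SERVICE_URL="https://kaltura.example.com"
PARTNER_ID="101"
ENTRY_ID="0_abc12345"
# playManifest with format=applehttp serves an HLS (.m3u8) master playlist.
HLS_URL="${SERVICE_URL}/p/${PARTNER_ID}/sp/${PARTNER_ID}00/playManifest/entryId/${ENTRY_ID}/format/applehttp/protocol/https/a.m3u8"
echo "$HLS_URL"
```

Opening that URL in a player (or a browser that supports HLS natively) requests the playlist directly from the server.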
When using the “Embed Code” or “Standalone Preview” URL obtained from KMCng, the video is played in the Kaltura HTML5 player.
In recent versions, the default delivery format of the Kaltura HTML5 player is HLS.
Therefore, no special settings are required.
If you want to use another delivery format (progressive download, HDS, MPEG-DASH, etc.), you must configure it in the players’ “UI variables”.
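As an illustration of overriding the delivery format through “UI variables” (this applies to the legacy Kaltura Player V2; newer players are configured differently, so treat the variable name as an assumption to verify), forcing progressive download could be done by adding a UI variable to the player in KMC Studio:

```
streamerType=http
```

Here “http” selects progressive download, while “hls” (the default) selects HLS.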
The .m3u8 and .ts files are generated and delivered automatically when the Kaltura server receives a playback request from a user.
Testing in the office with the “standalone preview” URL, 37 people were able to play the video and 12 were not.
Via VPN, everyone failed. On the same floor, with the same OS version and browser version, some people were able to play the video and some were not.
I thought the cause was that the stream was not working.
That’s why I was investigating it, but now I wonder whether the streaming method matters at all, given that the video plays in the HTML5 player for some users but not for others on the same floor with the same OS, browser, and version.
It takes a while for the video to start, so I thought it was progressive download rather than streaming.
What do you think is the cause?
I would appreciate any advice you can give me.
When you use Internet Explorer to play videos, please pay attention to the proxy settings and the compatibility mode (compatibility view).
If Internet Explorer runs in an old document mode (e.g., IE 5 or 7), it cannot handle streaming video data.
The VPN issue also turned out to be a proxy-related setting. Thank you very much for your help.
I’m not sure if I should talk about it here.
Do you know if it is possible to set up server 1 and server 2 and use a load balancer to allow multiple people to play a video at the same time?
Kaltura CE supports multiple servers, so many users can play videos concurrently.
Since multiple all-in-one servers do not work as one system, you should separate the delivery servers (front nodes) from the back-end servers (DB server, data warehouse, batch server, etc.).
That is, you should build multiple delivery servers plus one or more back-end servers.
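For the front nodes, a plain HTTP load balancer can spread playback requests across the delivery servers. A minimal sketch, assuming HAProxy and hypothetical hostnames (the back-end servers are not proxied here):

```
# Minimal HAProxy sketch balancing two Kaltura front nodes.
# front1/front2.example.com are hypothetical hostnames.
frontend kaltura_http
    bind *:80
    default_backend kaltura_front

backend kaltura_front
    balance roundrobin
    server front1 front1.example.com:80 check
    server front2 front2.example.com:80 check
```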
Happy New Year to you all.
Now, about the server distribution: does the minimum configuration require three servers, two front nodes for delivery and one back-end server?
Or can one server double as both a front node and the back-end, giving a two-server configuration with the other server as a dedicated front node?