Installation broken after upgrade from 15.6 to 16.4 with non-SSL

Hello @Sakurai,

When you run “kaltura-config-all.sh” or “kaltura-db-config.sh”, the users “kaltura@%” and “etl@%” must not already exist.
If they do, they cause “API request fails” and “bailing out” errors during configuration.
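
If you are not sure, one way to check (a minimal sketch, assuming you can log in as the MySQL/MariaDB root account) is:

# list any existing kaltura/etl accounts before (re)configuring
mysql -u root -p -e "SELECT User, Host FROM mysql.user WHERE User IN ('kaltura','etl');"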

If videos play successfully, your database was created successfully.
Note that the playback count of a video is not updated immediately.
The log-rotation daemon copies the Apache and nginx log files to the “/opt/kaltura/web/logs” directory; on CentOS, it runs between 3:00 AM and 4:00 AM.
Then, a cron task (etl_hourly.sh) inserts records into the database based on those log files.
Finally, “dwh_plays_views_sync.sh” updates the playback count of each video at 10:00 AM.

If the playback count is not updated, please check the log files in the “/opt/kaltura/dwh/logs” directory.
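
For example, a quick way to find the log files that contain errors (a minimal sketch, assuming GNU grep):

# list DWH log files that contain error lines
grep -rl -e 'ERROR' -e 'Error' /opt/kaltura/dwh/logs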

Regards,

Hello @t-saito

I don’t know why, but I have confirmed that “kaltura@%”, “etl@%”, “kaltura@localhost”, and “etl@localhost” are created immediately after running “kaltura-config-all.sh”, as follows.

I made two modifications before running “kaltura-config-all.sh”.
First, in “kaltura-db-config.sh”, I changed “kaltura@'%'” and “etl@'%'” to “kaltura@localhost” and “etl@localhost”.
Second, I moved “kaltura-es-config.sh” so that it runs after “kaltura-db-config.sh”, because “kaltura-es-config.sh” fails with an error if the kaltura and etl users do not exist when it runs.

I understood from your advice that the count does not change immediately.
Is there any way to test and confirm that the count is updated, without waiting?
Also, do you know if there is a way in Kaltura to check who has accessed the site and for how long?

Regards,

Hello @Sakurai,

“kaltura-config-all.sh” and “kaltura-db-config.sh” do not delete existing users.
So, I think the previously created “kaltura@%” and “etl@%” remained in the database.
If all the database tables were created successfully, these users will not cause any trouble in the future.

In order to check the Kaltura DWH, please execute “/opt/kaltura/bin/kaltura-run-dwh.sh”.
Note that this script moves the Apache and nginx log files into the “/opt/kaltura/web/logs” directory.
As a result, after this script runs, the Apache and nginx log files become empty, and today’s log files in “/opt/kaltura/web/logs” are overwritten.
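
If you want to keep the current files, a cautious sketch is to snapshot that directory first (the path is the one mentioned above; the backup name is arbitrary):

# snapshot today's DWH input logs before re-running, since they would be overwritten
cp -a /opt/kaltura/web/logs /opt/kaltura/web/logs.bak-$(date +%Y%m%d-%H%M%S)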

Regards,

Hello @t-saito

I’m sorry if I’m rambling.
This was a fresh first run of “kaltura-config-all.sh”. (After the earlier attempt, I reverted to the original snapshot and ran everything again from the beginning.)
Just before running “kaltura-config-all.sh”, I did the following.
(screenshot: Image6)

As for “/opt/kaltura/web/logs”, I have confirmed that the files are created there.
Apart from that, can’t I check access immediately in the KMC?
I want to see who played a video, and for how long, in the KMC’s “User Analytics”.
Is this not possible?

Regards,

Hello @Sakurai,

If “/opt/kaltura/bin/kaltura-run-dwh.sh” finishes successfully, the playback counts of each video are updated immediately, and the updated analytics data is reflected immediately in the KMC.
If the playback counts are not updated, the “kaltura-run-dwh.sh” process failed.
In that case, please check the log files in the “/opt/kaltura/dwh/logs” directory.
When an error has occurred, there will be a log file containing an “ERROR” or “Error” line.
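
To narrow it down to the most recent run, something like this works (again assuming GNU coreutils and grep):

# show the error lines, if any, from the newest DWH log file
ls -t /opt/kaltura/dwh/logs/*.log | head -n 1 | xargs grep -n -e 'ERROR' -e 'Error'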

Regards,

Hello @t-saito

I found that errors are written in two files under “/opt/kaltura/dwh/logs”.

They are “etl_hourly-20210114-16.log” and “log_events_events.log”.

For example, “etl_hourly-20210114-16.log” contains the following:

ERROR 14-01 16:18:50,816 - parse bandwidth lines - Unexpected error
ERROR 14-01 16:18:50,816 - parse bandwidth lines - org.pentaho.di.core.exception.KettleValueException:
Javascript error:
Could not apply the given format dd/MMM/yyyy:HH:mm:ss on the string for 14/Jan/2021:12:34:23 : Format.parseObject(String) failed (script#15)

at org.pentaho.di.trans.steps.scriptvalues_mod.ScriptValuesMod.addValues(ScriptValuesMod.java:457)
at org.pentaho.di.trans.steps.scriptvalues_mod.ScriptValuesMod.processRow(ScriptValuesMod.java:688)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:40)
at java.lang.Thread.run(Thread.java:748)

And “log_events_events.log” contains, for example:

2021/01/14 16:18:50 - parse bandwidth lines.1 - ERROR (version 4.2.1-stable, build 15952 from 2011-10-25 15.27.10 by buildguy) : Unexpected error
2021/01/14 16:18:50 - parse bandwidth lines.1 - ERROR (version 4.2.1-stable, build 15952 from 2011-10-25 15.27.10 by buildguy) : org.pentaho.di.core.exception.KettleValueException:
2021/01/14 16:18:50 - parse bandwidth lines.1 - ERROR (version 4.2.1-stable, build 15952 from 2011-10-25 15.27.10 by buildguy) : Javascript error:
2021/01/14 16:18:50 - parse bandwidth lines.1 - ERROR (version 4.2.1-stable, build 15952 from 2011-10-25 15.27.10 by buildguy) : Could not apply the given format dd/MMM/yyyy:HH:mm:ss on the string for 14/Jan/2021:12:34:23 : Format.parseObject(String) failed (script#15)

Can you think of anything?

Regards,

Hello @Sakurai,

Sorry!
I forgot an important issue.
The Kaltura DWH picks up log lines whose date-time strings match the “dd/MMM/yyyy:HH:mm:ss” format.
On CentOS with a Japanese locale, the date-time format in the log files is “dd/MMM/yyyy:HH:mm:ssZ”.
Therefore, the Kaltura DWH does not pick up the playback events.
You need to fix some scripts; please see the following post.

That post was written for Kaltura 14.1.0.
In the current version, the lines to be modified in each file are different, but the fixes themselves are the same.
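
As a starting point, you can locate the files that contain the format string (an illustrative command; the exact files to edit depend on your version):

# find the KTR files that embed the date-time format string
grep -rl 'dd/MMM/yyyy:HH:mm:ss' /opt/kaltura/dwh --include='*.ktr'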

Regards,

Hello @t-saito

Thank you again and again.
This fixed the error I was getting, but now I see errors in other files:

“etl_daily-20210115.log”
“etl_hourly-20210115-14.log”
“log_aggregation_perform_aggregations.log”
“log_events_events.log”

In the first one, “etl_daily-20210115.log”, I see something like this:

INFO 15-01 14:17:45,617 - Table input - Finished processing (I=1, O=0, R=1, W=1, U=0, E=0)
INFO 15-01 14:17:45,618 - Write to log - before calc - Finished processing (I=0, O=0, R=1, W=1, U=0, E=0)
ERROR 15-01 14:17:45,634 - aggregate plays - Unexpected error
ERROR 15-01 14:17:45,634 - aggregate plays - org.pentaho.di.core.exception.KettleStepException:
Error while running this step!

Couldn’t execute SQL: call kalturadw.calc_aggr_day_play(date(‘20210115’),14,‘plays_entry’)

Table ‘kalturadw.dwh_fact_plays’ doesn’t exist

Do you know what “Table ‘kalturadw.dwh_fact_plays’ doesn’t exist” means?
I would like to fix this first.

Regards,

Hello @Sakurai,

A “table not found” error means that the database tables were not all created successfully.
If the “kaltura” and “kaltura_sphinx_log” databases are created successfully, videos can be uploaded and played.
But the Kaltura DWH requires the other databases as well (kalturadw, kalturadw_ds, kalturadw_bisources, kalturalog).

If a “bailing out !” error occurs during the execution of “kaltura-config-all.sh”, “kaltura-db-config.sh”, or “kaltura-dwh-config.sh”, table creation is interrupted.
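
Since all of these database names start with “kaltura”, a simple existence check (assuming a MySQL/MariaDB admin account) is:

# list the Kaltura databases that actually exist
mysql -u root -p -e "SHOW DATABASES LIKE 'kaltura%';"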

Regards,

Hello @t-saito

I have checked the following:

“kaltura-config-all.sh”
“kaltura-db-config.sh”
“kaltura-dwh-config.sh”

“kaltura-dwh-config.sh” had no errors in /opt/kaltura/dwh/setup/installation_log.log, so I checked the other two.

The first error was found when running “kaltura-config-all.sh”.

CREATE USER kaltura;
CREATE USER etl;
CREATE DATABASE kaltura;
CREATE DATABASE kaltura_sphinx_log;
CREATE DATABASE kalturadw;

Output for /opt/kaltura/app/deployment/base/scripts/insertContent.php being logged into /opt/kaltura/log/insertContent.log
Generating UI confs…
Restarting Kaltura Bundle builder API Server/usr/lib/node_modules/forever/lib/util/config-utils.js:22
throw new Error('Failed to create directory '+dir+":" +error.message);

Error: Failed to create directory /home/kaltura/.forever:ENOENT: no such file or directory, mkdir ‘/home/kaltura/.forever’

I therefore did the following (as root) before running “kaltura-config-all.sh”:

# mkdir /home/kaltura
# chmod 777 /home/kaltura

That error then disappeared, but the output changed:

Output for /opt/kaltura/app/deployment/base/scripts/insertContent.php being logged into /opt/kaltura/log/insertContent.log
Generating UI confs…
Restarting Kaltura Bundle builder API ServerKaltura Bundle builder API Server is not running.
Kaltura Bundle builder API Server is not running.
Starting Kaltura Bundle builder API Server

This looks correct to me, but the error about the missing table was the same.
Do you know where I can find the log showing where “kaltura-db-config.sh” failed?

Regards,

Hello @Sakurai,

On Kaltura CE 16.5.0, when the Kaltura databases are created successfully, the following messages are printed:

Configuring your Kaltura DB...


Checking MySQL version..
Ver 5.5.68-MariaDB found compatible

CREATE USER kaltura;
CREATE USER etl;
CREATE DATABASE kaltura;
CREATE DATABASE kaltura_sphinx_log;
CREATE DATABASE kalturadw;
CREATE DATABASE kalturadw_ds;
CREATE DATABASE kalturadw_bisources;
CREATE DATABASE kalturalog;
Checking connectivity to needed daemons...
Connectivity test passed:)
Cleaning cache..
Populating DB with data.. please wait..
Output for /opt/kaltura/app/deployment/base/scripts/installPlugins.php being logged into /opt/kaltura/log/installPlugins.log
Output for /opt/kaltura/app/deployment/base/scripts/insertDefaults.php being logged into /opt/kaltura/log/insertDefaults.log
Output for /opt/kaltura/app/deployment/base/scripts/insertPermissions.php being logged into /opt/kaltura/log/insertPermissions.log
Output for /opt/kaltura/app/deployment/base/scripts/insertContent.php being logged into /opt/kaltura/log/insertContent.log
Generating UI confs..

If the databases are created successfully, no error occurs between “Ver 5.5.68-MariaDB found compatible” and “Connectivity test passed:)”.

When the DWH configuration finishes successfully, the following messages are printed:

Deploying analytics warehouse DB, please be patient as this may take a while...
Output is logged to /opt/kaltura/dwh/logs/dwh_setup.log.

sending incremental file list
MySQLInserter/
MySQLInserter/TOP.png
MySQLInserter/mysqlinserter.jar
MySQLInserter/plugin.xml

sent 2,647,325 bytes  received 77 bytes  5,294,804.00 bytes/sec
total size is 2,646,419  speedup is 1.00
sending incremental file list
MappingFieldRunner/
MappingFieldRunner/MAP.png
MappingFieldRunner/mappingfieldrunner.jar
MappingFieldRunner/plugin.xml

sent 90,962 bytes  received 77 bytes  182,078.00 bytes/sec
total size is 90,670  speedup is 1.00
sending incremental file list
GetFTPFileNames/
GetFTPFileNames/FTP.png
GetFTPFileNames/getftpfilenames.jar
GetFTPFileNames/plugin.xml

sent 7,310,938 bytes  received 77 bytes  4,874,010.00 bytes/sec
total size is 7,308,893  speedup is 1.00
sending incremental file list
FetchFTPFile/
FetchFTPFile/FTP.png
FetchFTPFile/fetchftpfile.jar
FetchFTPFile/plugin.xml

sent 5,784,791 bytes  received 77 bytes  11,569,736.00 bytes/sec
total size is 5,783,119  speedup is 1.00
sending incremental file list
DimLookup/
DimLookup/CMB.png
DimLookup/lookup.jar
DimLookup/plugin.xml

sent 3,689,114 bytes  received 77 bytes  7,378,382.00 bytes/sec
total size is 3,687,964  speedup is 1.00
sending incremental file list
UserAgentUtils.jar
ksDecrypt.jar

sent 54,896 bytes  received 54 bytes  109,900.00 bytes/sec
total size is 54,710  speedup is 1.00
current version 5999
DWH configured.

If the DWH is configured successfully, no error occurs between “Deploying analytics warehouse DB” and “DWH configured”.

The database creation logs are stored in “/opt/kaltura/dwh/setup/installation_log.log”.
The DWH configuration logs are stored in “/opt/kaltura/dwh/logs/dwh_setup.log”.
Other log files are stored in the “/opt/kaltura/log” directory.

Regards,

Hello @Sakurai,

There is one thing that puzzles me.
Your Kaltura system was installed in the “/opt/kaltura” directory, but the “.forever” directory was created in “/home/kaltura”.
The “.forever” directory is usually created in “/opt/kaltura”.
It seems that the home directory of the “kaltura” account is set to “/home/kaltura”; please check the “/etc/passwd” file.
Unless you have a specific reason, the home directory of the “kaltura” account should be set to “/opt/kaltura”.
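
One way to check, and to change it back if necessary (a sketch assuming standard CentOS shadow-utils; run as root):

# show the kaltura account's current home directory
getent passwd kaltura
# change the home directory back to /opt/kaltura
usermod -d /opt/kaltura kaltura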

By the way, it is also important to understand the relationships between the configuration shell scripts.
“kaltura-db-config.sh” creates the “kaltura@%” and “etl@%” accounts in MySQL/MariaDB, and creates the databases and the tables in the “kaltura” database.
(On CentOS 7, it is better to create “kaltura@localhost” and “etl@localhost” rather than “kaltura@%” and “etl@%”.)
“kaltura-dwh-config.sh” creates many tables in the “kalturalog”, “kalturadw”, and “kalturadw_ds” databases.
And “kaltura-config-all.sh” runs many shell scripts, including “kaltura-db-config.sh” and “kaltura-dwh-config.sh”.

If an error occurs during execution of “kaltura-db-config.sh”, the creation of the MySQL/MariaDB accounts and databases will be incomplete.
If an error occurs during execution of “kaltura-dwh-config.sh”, some tables in the databases will be missing.

If you execute “kaltura-config-all.sh” or “kaltura-db-config.sh” again to re-configure, you should first delete all the Kaltura databases and all “kaltura” and “etl” accounts in MySQL/MariaDB.
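
A minimal cleanup sketch (destructive, so take a backup first; “--force” makes mysql continue past errors such as dropping an account that does not exist, since MariaDB 5.5 has no “DROP USER IF EXISTS”):

mysql --force -u root -p <<'SQL'
DROP DATABASE IF EXISTS kaltura;
DROP DATABASE IF EXISTS kaltura_sphinx_log;
DROP DATABASE IF EXISTS kalturadw;
DROP DATABASE IF EXISTS kalturadw_ds;
DROP DATABASE IF EXISTS kalturadw_bisources;
DROP DATABASE IF EXISTS kalturalog;
DROP USER 'kaltura'@'%';
DROP USER 'etl'@'%';
DROP USER 'kaltura'@'localhost';
DROP USER 'etl'@'localhost';
SQL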

After the configuration finishes successfully, you will still have to adjust directory permissions and the log-rotation scripts for your environment.
Recently, we found a problem where analytics does not work properly after January 1, 2020.
To resolve that problem, you should fix some KTR files, or update your Kaltura system to version 16.14.0 (or later).
Note that updating returns various configuration files to their standard contents.
For example, you must fix the date-time format in the KTR files again after a system update.

The Kaltura system runs on multiple operating systems and in various language/locale environments, and has excellent delivery performance.
On the other hand, the installation programs and configuration scripts are not optimized for any particular environment.
For the system to work properly in a given environment, the administrator must modify some files before and after running the configuration scripts.
When I first installed an all-in-one Kaltura server, it took me half a year to get the system working properly.
After that, it took a few more months to get it working properly in a cluster environment.

Forum members will be able to help you when you run into trouble.
It will take a long time, so let’s do our best.

Regards,

Hello @t-saito

Thank you again and again.
I had no errors between “Ver 5.5.68-MariaDB found compatible” and “Connectivity test passed:)”.
I also had no errors between “Deploying analytics warehouse DB” and “DWH configured”.
(The output was the same as what you showed me.)
There is no error in “/opt/kaltura/dwh/setup/installation_log.log” either.

I don’t know why, but the file “/opt/kaltura/dwh/logs/dwh_setup.log” doesn’t exist.

I changed the kaltura account’s home directory in “/etc/passwd” from “/home/kaltura” to “/opt/kaltura”.
(For this reason, I deleted the /home/kaltura directory that I had created before running kaltura-config-all.sh.)

In this state, “Table ‘kalturadw.dwh_fact_plays’ doesn’t exist.” still appears in the log.
(etl_daily-20210118.log)
When I check with “show tables;” in MariaDB [kalturadw], I get

| dwh_fact_fms_sessions_archive
| dwh_fact_incomplete_api_calls
| dwh_fact_plays_archive
| dwh_hourly_api_calls
| dwh_hourly_errors

and I don’t see ‘dwh_fact_plays’.

Also, “etl_hourly-20210118-09.log” contains the following:

description = org.pentaho.di.core.exception.KettleDatabaseException:
Couldn’t execute SQL: call kalturadw_ds.transfer_cycle_partition(1)

You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ‘NULL’ at line 1

Am I doing something wrong?

Regards,

Hello @Sakurai,

If all the databases were created but some tables in the “kalturadw” database were not, then an error occurred during the DWH configuration.
When the DWH configuration ran, was there really no error between “Deploying analytics warehouse DB” and “DWH configured”?
After the DWH configuration finishes successfully, the transfer-rate messages shown above are printed.

Regards,

Hello @t-saito

I have confirmed the screen display as follows.

Configuring your Kaltura DB...


Checking MySQL version..
Ver 5.5.68-MariaDB found compatible

CREATE USER kaltura;
CREATE USER etl;
CREATE DATABASE kaltura;
CREATE DATABASE kaltura_sphinx_log;
CREATE DATABASE kalturadw;
CREATE DATABASE kalturadw_ds;
CREATE DATABASE kalturadw_bisources;
CREATE DATABASE kalturalog;
Checking connectivity to needed daemons...
Connectivity test passed:)
Cleaning cache..
Populating DB with data.. please wait..
Output for /opt/kaltura/app/deployment/base/scripts/installPlugins.php being logged into /opt/kaltura/log/installPlugins.log
Output for /opt/kaltura/app/deployment/base/scripts/insertDefaults.php being logged into /opt/kaltura/log/insertDefaults.log
Output for /opt/kaltura/app/deployment/base/scripts/insertPermissions.php being logged into /opt/kaltura/log/insertPermissions.log
Output for /opt/kaltura/app/deployment/base/scripts/insertContent.php being logged into /opt/kaltura/log/insertContent.log
Generating UI confs..

and

Deploying analytics warehouse DB, please be patient as this may take a while...
Output is logged to /opt/kaltura/dwh/logs/dwh_setup.log.

sending incremental file list
MySQLInserter/
MySQLInserter/TOP.png
MySQLInserter/mysqlinserter.jar
MySQLInserter/plugin.xml

sent 2,647,329 bytes received 77 bytes 5,294,812.00 bytes/sec
total size is 2,646,419 speedup is 1.00
sending incremental file list
MappingFieldRunner/
MappingFieldRunner/MAP.png
MappingFieldRunner/mappingfieldrunner.jar
MappingFieldRunner/plugin.xml

sent 90,966 bytes received 77 bytes 182,086.00 bytes/sec
total size is 90,670 speedup is 1.00
sending incremental file list
GetFTPFileNames/
GetFTPFileNames/FTP.png
GetFTPFileNames/getftpfilenames.jar
GetFTPFileNames/plugin.xml

sent 7,310,942 bytes received 77 bytes 14,622,038.00 bytes/sec
total size is 7,308,893 speedup is 1.00
sending incremental file list
FetchFTPFile/
FetchFTPFile/FTP.png
FetchFTPFile/fetchftpfile.jar
FetchFTPFile/plugin.xml

sent 5,784,795 bytes received 77 bytes 11,569,744.00 bytes/sec
total size is 5,783,119 speedup is 1.00
sending incremental file list
DimLookup/
DimLookup/CMB.png
DimLookup/lookup.jar
DimLookup/plugin.xml

sent 3,689,118 bytes received 77 bytes 7,378,390.00 bytes/sec
total size is 3,687,964 speedup is 1.00
sending incremental file list
UserAgentUtils.jar
ksDecrypt.jar

sent 54,896 bytes received 54 bytes 109,900.00 bytes/sec
total size is 54,710 speedup is 1.00
current version 5999
DWH configured.

Regards,

Hello @Sakurai,

It seems that some tables were not created even though the configuration scripts finished successfully.
The SQL statement that creates the “dwh_fact_plays” table is in “/opt/kaltura/dwh/ddl/dw/facts/dwh_fact_plays.sql”.
Running this SQL statement as the MariaDB “etl” account will create the “dwh_fact_plays” table.
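
For example (a sketch; it assumes the DDL file does not select a database itself, so the target database is given on the command line):

# create the missing table from the bundled DDL, as the etl account
mysql -u etl -p kalturadw < /opt/kaltura/dwh/ddl/dw/facts/dwh_fact_plays.sql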

Regards,

Hello @t-saito

I created the “dwh_fact_plays” table manually, as you suggested.
As a result, I no longer get the “table does not exist” error.
Thank you very much.
But I am now seeing one more error, in “etl_hourly-20210119-09.log”:


cycle_id = 2
description = org.pentaho.di.core.exception.KettleDatabaseException:
Couldn’t execute SQL: call kalturadw_ds.transfer_cycle_partition(2)

Table has no partition for value 20210119

====================
INFO 19-01 09:40:39,696 - transfer partition - Finished reading query, closing connection.
INFO 19-01 09:40:39,696 - transfer partition - Finished processing (I=0, O=0, R=1, W=0, U=0, E=1)
ERROR 19-01 09:40:39,696 - Abort - Row nr 1 causing abort : [2], [1], [org.pentaho.di.core.exception.KettleDatabaseException:
Couldn’t execute SQL: call kalturadw_ds.transfer_cycle_partition(2)

Table has no partition for value 20210119
], [null], [ExecSQL001]
ERROR 19-01 09:40:39,696 - Abort - Aborting after having seen 1 rows.

What do you think is the cause of this?

Regards,

Hello @Sakurai,

My database does not have the “kalturadw_ds.transfer_cycle_partition” function, but this error does not occur on my Kaltura server.

I noticed something strange.
The error message says “Table has no partition for value 20210119”.
In the Kaltura databases, tables have partitions such as “p_202101” and “p_20210119”; in almost all tables, the partition names start with “p_”.
I think there is a mistake in the SQL statements that create the tables.

In order to see the partitions, please execute the following SQL statement in MySQL/MariaDB.

mysql> SELECT TABLE_SCHEMA,TABLE_NAME,PARTITION_NAME,PARTITION_ORDINAL_POSITION,TABLE_ROWS from INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA='kalturadw';

Please look at the “PARTITION_NAME” column in the results.

If the partition names in a table do not start with “p_”, you must fix the SQL statements related to partitioning, and create the databases again.
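
To list only the suspicious names directly, a variant of the same query (assuming the default backslash escape character for LIKE) is:

# show partitions whose names do not start with "p_"
mysql -u root -p -e "SELECT TABLE_NAME, PARTITION_NAME FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA='kalturadw' AND PARTITION_NAME IS NOT NULL AND PARTITION_NAME NOT LIKE 'p\\_%';"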

Regards,

Hello @t-saito

Thank you for your advice.
I have checked the “PARTITION_NAME” column.
There are two kinds of names that do not start with “p_20”.
One is NULL; the other is “p_0”. There are no other values.
Example:
| kalturadw | aggr_managment | NULL | NULL | 13

| kalturadw | dwh_fact_api_calls_archive | p_0 | 1 | 0 |

I am not familiar with the format of the SQL files, but a few things caught my attention:

  1. Some of the SQL files do not have a date line at the end.
  2. In some files, the date line at the end is commented out.
  3. Some commented-out statements have a “;” after them, and some do not.
  4. Some statements have a newline at the end, and some do not.
  5. Some files are recognized as DOS-formatted by Emacs.

I don’t know which of these is correct, so I am just listing the details I noticed.
Could any of this be the cause?

Regards,

Hello @Sakurai,

Since the “aggr_managment” table has no partitions, its partition name is “NULL”.
And the first partition of the “dwh_fact_plays_archive” table is “p_0”, but the second and later partitions have other names, such as “p_20201219”.

Some comment lines end with a semicolon and others do not; this probably has no effect on the operation of MySQL/MariaDB.
DOS line endings and a missing newline at the end of a file also have no effect on the operation of MySQL/MariaDB.

I think it is a problem related to partition creation or to the registration of the function (procedure).
Or the procedure patch may not have been applied successfully.
The “transfer_cycle_partition” function has been fixed several times, and the Kaltura source also contains some patches.
These patches are applied during the DWH configuration.

I don’t have a good idea at this point.
Maybe it would work fine with a clean install of the latest version.
Note that the SQL files of the latest version also have the same issue (the start date/month format of the partition creation) as the old versions.
So in the latest version, too, you have to modify some SQL statements, as you did in your previous version, before the initial configuration.

@jess, do you have an idea for a solution?
Can you help us with this issue?

Regards,