Upload with the Apache Commons FileUpload API

I have an existing file upload and processing routine that I would like to integrate with the Kaltura APIs, but I am having trouble finding Javadocs and other documentation and examples. Currently, the frontend form handles the upload in conjunction with the Apache Commons libraries; the file is uploaded to OUR server and then processed. What I would like to do is upload directly to the Kaltura server without first staging the file on our server. I created a test program using the example found in the GitHub repository (the UploadTest Java code). It works great, but it is based on passing the full path to the file to the API. What I’d like to do instead is use an InputStream/FileInputStream and write that directly to the Kaltura server. (I have a trial account at Kaltura.com that I am testing with.)

Is there any way to use the client.getUploadTokenService().upload() API to send an InputStream up to the server? I’d like to eliminate the “staging” of the file on our server before passing it up to the Kaltura server for processing.
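For context, Kaltura exposes its `uploadToken` actions as plain HTTP endpoints, so one way around the client library's file-path requirement is to build the multipart POST yourself and stream the `InputStream` straight into the connection. The sketch below is untested against a real account; the endpoint path and the `ks`, `uploadTokenId`, `fileData`, `resume`, and `finalChunk` parameter names follow Kaltura's public API reference, but verify them in the API console before relying on this:

```java
import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class StreamingUpload {

    /** Writes one multipart/form-data body: text fields first, then the file part streamed in 8 KiB chunks. */
    static void writeMultipartBody(OutputStream out, String boundary,
                                   Map<String, String> fields,
                                   String fileFieldName, String fileName,
                                   InputStream fileData) throws IOException {
        Writer w = new OutputStreamWriter(out, StandardCharsets.UTF_8);
        for (Map.Entry<String, String> e : fields.entrySet()) {
            w.write("--" + boundary + "\r\n");
            w.write("Content-Disposition: form-data; name=\"" + e.getKey() + "\"\r\n\r\n");
            w.write(e.getValue() + "\r\n");
        }
        w.write("--" + boundary + "\r\n");
        w.write("Content-Disposition: form-data; name=\"" + fileFieldName
                + "\"; filename=\"" + fileName + "\"\r\n");
        w.write("Content-Type: application/octet-stream\r\n\r\n");
        w.flush();
        byte[] buf = new byte[8192];
        for (int n; (n = fileData.read(buf)) != -1; ) {
            out.write(buf, 0, n);               // the file is never buffered whole in memory
        }
        out.flush();
        w.write("\r\n--" + boundary + "--\r\n");
        w.flush();
    }

    /** Streams 'data' to uploadToken.upload without staging it on disk first. */
    static void upload(String ks, String tokenId, String fileName, InputStream data) throws IOException {
        String boundary = "----kaltura" + System.nanoTime();
        URL url = new URL("https://www.kaltura.com/api_v3/service/uploadtoken/action/upload");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(8192);     // avoids buffering the body to compute Content-Length
        conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);
        var fields = new java.util.LinkedHashMap<String, String>();
        fields.put("ks", ks);                   // Kaltura session string
        fields.put("uploadTokenId", tokenId);   // from a prior uploadToken.add call
        fields.put("resume", "0");
        fields.put("finalChunk", "1");
        try (OutputStream out = conn.getOutputStream()) {
            writeMultipartBody(out, boundary, fields, "fileData", fileName, data);
        }
        if (conn.getResponseCode() != 200) {
            throw new IOException("Upload failed: HTTP " + conn.getResponseCode());
        }
    }
}
```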

Hi @phelgren,

I don’t have a Java code sample for that, but I do have several JS ones:
The first uses sample code from the WebRTC project to capture the webcam input and ingest the resulting blob directly to Kaltura, so no additional server-side code is needed:

This example uses the resumablejs lib to get a file from the user via an HTML form, split it into smaller chunks, and upload it to Kaltura:


Again, this is pure JS; no server-side code is needed.

Hope that helps. If not, please provide your full code sample and I’ll see what I can do to help you further.

The kicker isn’t the client side. The client HTML page and JS run on a Liferay 7.0 framework server, so getting the file UP to where our server can process it isn’t the issue. The client and server we currently run have pause/resume/cancel built in and have been running for years. The transcoding side is handled by a set of Java libraries that wrap FFMPEG. So the steps are:

  1. Upload the file from the client using JS.
  2. Once the file is uploaded, spawn a Java thread to transcode the file.

I want to replace step two with spawning a thread that basically “pushes” the file that is already on our server up to the Kaltura server. Since the file is streamed up to the Kaltura server using a FileInputStream (in the ChunkedUpload class), it seemed more efficient to pass the file from OUR server to the Kaltura server as a FileInputStream as well.
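The step-two replacement described above can be sketched as a small dispatcher that hands the already-uploaded file to a worker thread. The `pushToKaltura` consumer here is a hypothetical stand-in for whatever uploader you end up with, not an API from the Kaltura client:

```java
import java.io.*;
import java.util.concurrent.*;
import java.util.function.Consumer;

public class IngestDispatcher {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    /** Step 2 replacement: push the file we already hold on a worker thread instead of transcoding it. */
    public Future<?> dispatch(File uploaded, Consumer<InputStream> pushToKaltura) {
        return pool.submit(() -> {
            try (InputStream in = new BufferedInputStream(new FileInputStream(uploaded))) {
                pushToKaltura.accept(in);   // hypothetical uploader call
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            return null;
        });
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

The returned `Future` lets the existing housekeeping code wait for, or cancel, the push the same way it presumably tracks the transcode thread today.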

The Java code is Kaltura’s code posted on GitHub: https://github.com/kaltura/Sample-Kaltura-Chunked-Upload-Java

I may try to modify the ChunkedUpload class to take a FileInputStream rather than the file URI and see if I can get that to work. I am trying to keep the code changes on our server to a minimum so I can test performance, and I also want to avoid changing the client UI in our existing implementation, which is why I am reluctant to use the JS you referenced.
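That refactor mostly comes down to replacing seek-by-offset file reads with sequential reads from the stream. A hypothetical sketch of the core loop follows; the chunk size and callback are made up for illustration, not taken from the ChunkedUpload class:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.function.BiConsumer;

public class StreamChunker {
    /**
     * Reads the stream sequentially and hands each full chunk (plus a trailing short one)
     * to the consumer along with its index. Unlike a RandomAccessFile, a stream cannot
     * seek, so uploading chunks in parallel requires copying each chunk to its own buffer
     * first, which this does via clone().
     */
    public static long chunk(InputStream in, int chunkSize,
                             BiConsumer<Integer, byte[]> onChunk) throws IOException {
        long total = 0;
        int index = 0;
        int filled = 0;
        byte[] buf = new byte[chunkSize];
        int n;
        while ((n = in.read(buf, filled, chunkSize - filled)) != -1) {
            filled += n;
            total += n;
            if (filled == chunkSize) {
                onChunk.accept(index++, buf.clone());
                filled = 0;
            }
        }
        if (filled > 0) {                     // trailing partial chunk
            byte[] last = new byte[filled];
            System.arraycopy(buf, 0, last, 0, filled);
            onChunk.accept(index, last);
        }
        return total;
    }
}
```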

Thanks for that info, though. I may circle back around if I can’t modify the existing classes to work as I would like.

Hi @phelgren,

Regarding the Java code you’ve been looking at, this repo is better:

because it supports uploading several chunks in parallel.

What I don’t understand, though, is why you need to upload the file from the client to an intermediate server to begin with. Since Kaltura’s ingestion flow also includes transcoding [using FFmpeg] to multiple, customisable flavours, you can skip what you currently do with Java and upload the user’s source file [as received from the JS client code] directly to Kaltura; the transcoding will then take place on Kaltura’s servers.
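For reference, the ingestion flow mentioned above is a sequence of API calls: create an upload token, upload the bytes against it, create a media entry, and attach the token to the entry. The sketch below frames those calls as raw HTTP POSTs; the service/action names and the `entry:`/`resource:` parameter spellings are taken from Kaltura's public API docs but should be double-checked in the API console, and the commented flow at the bottom is an outline, not tested code:

```java
import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class IngestFlow {

    /** Builds an application/x-www-form-urlencoded body from a parameter map. */
    static String formEncode(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    /** POSTs one service/action call and returns the raw response body. */
    static String call(String service, String action, Map<String, String> params) throws IOException {
        URL url = new URL("https://www.kaltura.com/api_v3/service/" + service + "/action/" + action);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(formEncode(params).getBytes(StandardCharsets.UTF_8));
        }
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder sb = new StringBuilder();
            for (String line; (line = r.readLine()) != null; ) sb.append(line);
            return sb.toString();
        }
    }

    // Outline of the flow (parameter names per the public API docs; verify before use):
    //   1. call("uploadtoken", "add",  Map.of("ks", ks))                      -> parse token id
    //   2. stream the bytes to uploadtoken/upload with that token
    //   3. call("media", "add",        Map.of("ks", ks, "entry:name", name,
    //            "entry:mediaType", "1"))                                     -> parse entry id
    //   4. call("media", "addContent", Map.of("ks", ks, "entryId", entryId,
    //            "resource:objectType", "KalturaUploadedFileTokenResource",
    //            "resource:token", tokenId))
}
```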

However, if the file is already on that intermediate server, then why can’t you use https://github.com/kaltura/kaltura-chunked-upload-test as is? It already accepts a path to the file, and you have that path on the server’s local FS… what am I missing? :slight_smile:

The “missing” piece is that we track the class, the logged-in user, and some other items in a DB record on the server. We also allow users to upload a text file that is associated with the media (audio/video), send out notifications by email, track access when the file is played, and handle some other housekeeping items. What I am trying to do is retain all of that metadata and tracking info for the 1200 classes and 400,000 users we have.

ALL of that code is written and tested, so I was hoping that when the InputStream arrives at our server from the JS client I could just pass it on to Kaltura WITHOUT the intermediate stop, basically passing the stream through while collecting the metadata. The API I looked at needs a path to the file in order to upload it to the server, and that is what I have working at the moment. But that means the upload to our server happens first and the upload to the Kaltura server happens second. Two steps where one should do, but that is the API I found.
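For what it's worth, Commons FileUpload's streaming API can support exactly this pass-through: `ServletFileUpload.getItemIterator(request)` hands back each part as a `FileItemStream`, and `openStream()` yields the `InputStream` that could be fed to an `InputStream`-based uploader while the metadata is recorded. A hedged sketch, where `recordMetadata` and `pushToKaltura` are hypothetical stand-ins for the existing tracking and upload code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class PassThroughUpload {

    /** Copies in -> out in 8 KiB chunks; the core of the "no staging" path. */
    public static long pipe(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        for (int n; (n = in.read(buf)) != -1; ) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    /*
     * With the Commons FileUpload streaming API, the servlet handler becomes roughly:
     *
     *   FileItemIterator it = new ServletFileUpload().getItemIterator(request);
     *   while (it.hasNext()) {
     *       FileItemStream item = it.next();
     *       try (InputStream in = item.openStream()) {
     *           if (item.isFormField()) {
     *               recordMetadata(item.getFieldName(), in);   // your existing DB tracking
     *           } else {
     *               pushToKaltura(in);                         // hypothetical uploader call
     *           }
     *       }
     *   }
     *
     * so the media bytes never touch the local filesystem.
     */
}
```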

Using the JS client you posted might be a way to handle things in the future, but with so many moving parts in the current application, all needed for our own detailed tracking metrics, I don’t want to rewrite too much code.

I get ALL the advantages of having Kaltura do the heavy lifting on the transcoding and playback piece; wrapping our needs around that is the challenge.