Content Ingest Network – Empowering File-based Workflows

We are at an inflection point: from here on, video is the dominant form of Internet traffic. Video already represents over half of all consumer Internet traffic, and Cisco predicts that by 2014, 91% of all Internet traffic will be some form of online video. More and more of this traffic is moving out of the browser and into apps, packaged by services such as Netflix, Hulu, and Google.TV. The growth of online video has driven the development of content management tools (such as Ensemble Video) for the ingest, cataloguing, transcoding, and publishing of online video in a variety of forms for browsers, apps, tablets, and players. The infrastructure for delivering online video to users is the ubiquitous Content Delivery Network (CDN), of which there are over 50 to choose from. There are also CDN optimization services (such as Namecast) that will send your packets via the optimal (fastest, best, cheapest) CDN at any given moment while avoiding congested routes. So the download path is very well served.

Despite the commoditization of the CDN, there are very few symmetrical solutions for getting large video files uploaded into the network. Protocols like FTP do not work well when there is latency and/or packet loss, so uploading and transporting large files can be very time consuming. It doesn't matter how much bandwidth you have; the protocol can't use it. This problem is already critical inside content creation organizations (broadcasters, studios, post production houses, game design and animation houses, ad agencies), where file-based workflows are displacing the old ways of doing things at a rapid pace. To make matters worse, in these environments the content often cannot be compressed until the very last stage of the process.
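
To put numbers on that claim, the well-known Mathis et al. approximation bounds a single TCP flow at roughly MSS / (RTT x sqrt(loss)). The Python sketch below is purely illustrative (constant factors omitted; the loss and RTT figures are examples, not measurements), but it shows why the same loss rate that is harmless on a LAN strangles a transcontinental transfer:

    from math import sqrt

    MSS_BITS = 1460 * 8   # typical Ethernet TCP payload, in bits

    def tcp_ceiling_mbps(rtt_s, loss):
        """Rough upper bound on one TCP flow's throughput, in Mb/s (Mathis et al.)."""
        return (MSS_BITS / rtt_s) / sqrt(loss) / 1e6

    print(f"{tcp_ceiling_mbps(0.001, 0.0001):8.1f} Mb/s at  1 ms RTT")   # ~1168 Mb/s
    print(f"{tcp_ceiling_mbps(0.080, 0.0001):8.1f} Mb/s at 80 ms RTT")   # ~14.6 Mb/s

With 0.01% loss, a flow that could fill a gigabit LAN link is capped below 15 Mb/s coast to coast, no matter how big the pipe.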

There are several solutions to this content ingest problem, depending on the exact requirements. Software and appliance solutions can be deployed at the endpoints to replace FTP with a faster protocol. This allows all of the available bandwidth to be used for upload or transport, but it requires a capital expense to purchase the endpoint appliances or software. A more elegant approach is to put the acceleration capability into the network itself, as Attend has done.

The beauty of this approach is that there is no capital expense, no hardware or software, no license to maintain, AND no change to the workflow. Because the latency between the customer and the nearest Attend point of presence (PoP) is negligible, FTP works well on that leg. Between Attend PoPs (for example, Los Angeles to New York, or London to LA) the file is carried over a private high-speed infrastructure with no packet loss. Thus the file transfer can operate at maximum speed from end to end.
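
For intuition, the sketch below shows the split-connection idea in miniature: a relay that terminates the client's short-RTT TCP session locally and opens a fresh connection toward the next hop. This is only a toy illustration of the general technique, not Attend's implementation; the hostname and port are hypothetical, and a real PoP would run a loss-tolerant transport on the inter-PoP leg rather than plain TCP.

    import asyncio

    LISTEN_PORT = 2121                         # hypothetical local ingest port
    UPSTREAM = ("nyc-pop.example.net", 2121)   # hypothetical next hop toward the destination

    async def pump(reader, writer):
        # Copy bytes in one direction until EOF, then close our side.
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
        writer.close()

    async def handle(client_r, client_w):
        # The short-RTT leg terminates here; a fresh long-haul connection starts below.
        up_r, up_w = await asyncio.open_connection(*UPSTREAM)
        await asyncio.gather(pump(client_r, up_w), pump(up_r, client_w))

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", LISTEN_PORT)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())

Because each leg sees only a short, clean round trip, ordinary protocols like FTP run near line rate on every hop.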

Today, top speeds of 100 Megabytes per second are not unheard of. At this speed, the contents of an entire DVD (6 Gigabytes) can be uploaded or transferred in about a minute, a Blu-ray disc (56 Gigabytes) in about 10 minutes, and a full-resolution uncompressed feature (a Terabyte) in under 3 hours. Putting special software or appliances on each end of the ingest/transfer path can enable real-time streaming of uncompressed HD or 2K content, and file ingest/transfer up to 4-5 times faster still, but it requires capital equipment and changes to the workflow. The beauty of the Content Ingest Network is that it is available on demand to anyone with high-performance Internet access, with no up-front investment and no change to the workflow. Naturally, this approach is not just applicable to video, but to any large file that needs to be uploaded or transported securely, reliably, and quickly.
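
The arithmetic behind those figures is easy to check (using decimal units, so 100 MByte/s = 1e8 bytes/s):

    RATE = 100e6  # bytes per second

    for name, size in [("DVD", 6e9), ("Blu-ray", 56e9), ("Uncompressed feature", 1e12)]:
        print(f"{name:22s} {size / RATE / 60:7.1f} minutes")
    # DVD ~1.0 min, Blu-ray ~9.3 min, 1 TB feature ~166.7 min (under 3 hours)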

5 Responses to Content Ingest Network – Empowering File-based Workflows

  1. Matt Miller says:

    Good summation, Chuck. The need to move large files (SD/HD video, log files, presentations) between geographically disparate locations is a growing issue. Two-ended solutions are just not practical, especially if hundreds or thousands of users (or systems) need to upload to a central application complex.

    The cost of instrumenting desktops, servers, devices, and so on would alone negate the benefit. Providing a transparent solution with no capital expense is truly beneficial. Another key benefit is the ability to increase upload capacity instantly as a customer's business grows.

    Reducing upload times will also benefit time-sensitive material (commercial spots, pre/post-production video, log files), reducing the overall cost of a service or project. The reduction also enables new applications and services that would previously have been nearly impossible to implement due to the data delay.

    • chuckstormon says:

      Agreed. The solution I've outlined would be ideal for companies like http://www.mediasilo.com or http://www.wiredrive.com, where a lot of users need to ingest a lot of content to a central cloud resource. I also think this solution would be ideal for companies building file-based workflows in-house, even without the cloud-storage element. In that case, the Content Ingest Network is actually used as a content transport network. Larger companies such as broadcasters and studios could implement the architecture I've described using either NIaaS or their own infrastructure if they prefer.

  2. Bill says:

    Greetings Chuck,

    Very cool article – you've pointed out several services I'd never heard of that are looking to answer the large-file-transfer problem more elegantly, and hopefully less expensively, than last year's offerings. As you mention, some companies are gaining access to nice 100 Mb/s pipes, and even they need UDP pipe-flooding solutions to gain full use of their high-speed connections. Regular TCP/IP is so chatty that 100 Mb/s pipes may throttle down to as low as 6 Mb/s for long-haul sends.
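
    That 6 Mb/s figure is consistent with the classic window-limit arithmetic: without TCP window scaling, at most one 64 KB receive window can be in flight per round trip. A quick back-of-the-envelope check in Python (the RTT is an illustrative long-haul figure, not a measurement):

        WINDOW_BYTES = 65535   # classic maximum TCP receive window (no scaling)
        RTT_S = 0.085          # illustrative ~85 ms long-haul round trip

        print(f"{WINDOW_BYTES * 8 / RTT_S / 1e6:.1f} Mb/s")  # ~6.2 Mb/s, regardless of pipe size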

    The main question is "what problem are you trying to solve?" We've seen many companies who feel they need to send uncompressed HD footage around to the director / client / VIP so they can "see" it better. Unfortunately, really large files can mean really slow downloads and incompatible playback (iPad?), when a nice H.264 reference file would do just fine.

    Loading large files into a hub/spoke system is cool as a temporary storage solution, but ultimately the best way is to let clients direct-transfer files over an accelerated P2P protocol. Unfortunately, this type of service is extremely expensive with current UDP accelerators, who have no financial incentive to "give it away." Open-source efforts in this space still seem relegated to university projects.

    • chuckstormon says:

      Thanks, Bill. You raise an excellent point about compressed vs. uncompressed content. The problem we're trying to solve is both. Uncompressed content has its place, but you're right that most of the time an H.264 or JPEG2K file will do nicely. Over the weekend a customer had 3–6 GByte H.264 files to ingest to a distant FTP server. The existence of the Content Ingest Network allowed them to meet their client's deadline, avoid late penalties, and, more importantly, keep the faith of a huge client. So, as you point out, it's not just about multi-TByte files, but when it is... no problem there either.

  3. Matt Miller says:

    Bill,
    Good points raised in your post.
    – UDP solutions can get expensive, both in the cost of the solution itself and in the support required for deployment.

    – TCP is an inefficient animal, especially over long distances. The expanding global nature of business exacerbates the latency/loss holes in TCP.

    – The key takeaway, IMHO, is enabling users to transmit/download the files they WANT to see as opposed to what they are LIMITED to. This opens the way for new workflows and business models that can be monetized as new recurring revenue streams.
