TVN Tech | Michelle Munson Opens New Chapter With Eluvio

The networking tech veteran’s latest venture aims to replace traditional content-delivery networks and their myriad copies of different files suited for different devices with the Eluvio Content Fabric, a blockchain-based software service platform. She talks about Eluvio and how she sees the media technology landscape heading into IBC.

After revolutionizing video file transfers with Aspera, technology pioneer Michelle Munson is seeking to move past files altogether with Eluvio, a new company that is using blockchain technology to overhaul internet video distribution.

Berkeley, Calif.-based Eluvio, which is making its public debut at the IBC show in Amsterdam this week, is the second media tech startup to be led by Munson and co-founded by Munson and Serban Simu. The pair became friends working at a Silicon Valley startup before founding Aspera in 2003, based on the FASP (Fast, Adaptive, Secure Protocol) networking technology they invented.

Aspera quickly found favor with large media companies looking to send video through IP links instead of overnight mail for production and distribution applications. The company won a Technical Emmy for FASP in 2013, before being acquired by IBM in 2014 (along the way, Munson and Simu also married in 2009).

Munson and Simu continued to run Aspera within IBM until 2017, when they left the company and embarked on a new project aimed at tackling the high costs of delivering video programming over the internet to consumers for Web, mobile and over-the-top viewing. After operating in stealth mode for the past two years they have now unveiled Eluvio.

The company aims to replace traditional content-delivery networks and their myriad copies of different files suited for different devices with the Eluvio Content Fabric, a blockchain-based software service platform that creates different versions of content on-the-fly as outputs are requested, from a single source stream or file. Eluvio says its software overlay network also delivers live video globally with an end-to-end latency of less than three seconds.

While Eluvio’s premise may sound radical, it already counts MGM Studios as its first major public customer, and Munson says the company will be demonstrating technology initiatives with several large broadcasters at IBC. Munson, a former Fulbright Scholar and the 2016 TVNewsCheck Technology Woman of the Year, is widely regarded as a thought leader in media technology circles, and in late July she joined the board of established editing and content management vendor Avid.


TVNewsCheck Contributing Editor Glen Dickson checked in with Munson to learn more about Eluvio and how she sees the media technology landscape heading into IBC.

An edited transcript follows.

You left IBM a couple years back, in 2017, and rather quickly started another company in Eluvio. What specific opportunity did you see in the media industry that was not being addressed?

We started working on this technology in the summer of 2017, and the driver for it really was the emerging scaling problem of video over the internet. That’s scaling in two dimensions. One is the technology side, starting with what we think of as classical distribution through CDNs: file-based versions of digital content or digital programming.

And the other piece is the economic side, which has become really imbalanced away from content providers and which today is largely controlled by internet distribution. And so the thought was that if we did some really innovative work that is now possible due to some changes in technology — particularly in computing — that we could fundamentally get at those aspects of the problem. And that is what the concept of it is really all about.

And you have been working with a lot of the same people who were at Aspera, correct?

Well, our team is a very interesting mixture. We are about two-thirds of the people that were at the core of Aspera, and one-third in new areas. Those new areas are outside of distributed computing and large-scale networking, which we really have at our intellectual foundations in the core team at Aspera.

The other foundations in this get at machine learning, foundationally and also blockchain technology, and our team is literally composed of leadership across that. We are all very hands-on. So this is very much an engineering endeavor, and I think it has allowed for just a phenomenal innovation that intersects those areas.

Can you describe how Eluvio’s core technology works and how broadcast customers would use it, how they would buy it and how they would deploy it?

The origins [of Eluvio] really fall out of what we would call content routing, and the originating idea is to enable the internet with intelligence for routing and distribution of content that is natively about the content itself, rather than host-based approaches to networking. So that is the origin.

Where blockchain comes into play in this is as a highly scalable decentralized software fabric for high-quality media distribution. That’s basically what it is. It gives us the decentralized data protocol. We are not using external blockchains; we built a blockchain mechanism into the functioning of the fabric for effectively managing the state, and that gives us the power to do a lot of things.

One of the most obvious things is that content is versioned inherently in the fabric, which means that those originating versions are repurposed and the changes to those are also transactional, meaning you could monetize them, and they are also recorded into the ledger. All of this happens as part of the distribution process.

The architecture of this is quite different. It doesn’t use files inside of the fabric function. It uses a new native representation of content that is really efficient, but this all happens inside the fabric logic. For broadcasters, studios — really any size company that is trying to distribute premium content — they effectively ingest it into the fabric — files, packages, streams, what have you, just as it is — and the fabric distributes it and composes, on the fly, the outputs the consumer needs as requests are made.

The first deployments that we are talking about publicly are for adaptive bit rate streaming — DASH, HLS, etc. — to Web, mobile and TV platforms, and VOD services, and also live streaming, which has very low latency from source to consumer. And then ways of using those together in new hybrid experiences. And our first customers are Tier One companies doing exactly those things.

When we talk about the CDN model, it is well understood that in order to serve content more quickly you have copies of files residing out there on servers in different strategic locations around the world depending on where you have the most people seeking that content. And that cuts down on the latency problem just because of physical distance. When we think about this blockchain content-centric system, where should we think about content residing?

Everywhere — but everywhere not as files. You described the problem [with CDNs] perfectly. Every version that is consumed is ultimately made into a file format, sometimes decomposed into segments, or different bitrates. Those files get pushed through those networks and cached at the edge. And the duplication of that process works its way all the way up the supply chain through the file-based workflows that sit behind that.

In the CDN itself, that’s what creates all the density of bandwidth usage that is so expensive, and it is also what makes things slow in interactive experiences because that caching doesn’t work when live manifest files have to change constantly. They are always refreshed, and have to be retrieved from their originating points.
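The caching problem Munson describes here can be illustrated with a small sketch. This is not Eluvio or CDN code — it is a hypothetical helper that renders a minimal HLS-style live media playlist, showing why such manifests go stale as soon as the live edge advances and therefore defeat edge caching:

```python
# Illustrative sketch (hypothetical, not Eluvio code): why live ABR manifests
# defeat edge caching. Each new 2-second segment changes the media playlist,
# so a cached copy goes stale almost immediately and clients must re-fetch
# from the originating point.

def live_manifest(media_sequence: int, segments: int = 3) -> str:
    """Render a minimal HLS-style media playlist for the current live edge."""
    lines = ["#EXTM3U", f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}"]
    for i in range(media_sequence, media_sequence + segments):
        lines += ["#EXTINF:2.0,", f"segment_{i}.ts"]
    return "\n".join(lines)

# Two snapshots one segment apart differ, so any cached manifest is already stale.
assert live_manifest(100) != live_manifest(101)
print(live_manifest(100))
```

With 2-second segments, the playlist changes every 2 seconds, so a CDN can cache it for at most that long before serving stale data.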

So with all of that in mind, the content fabric is utterly different from that. First, what we think of as all those output versions and all those copies don’t live anywhere. The source data is the data layer of the content objects, and that is spread around and sharded, and the output variants are rebuilt just in time for on-demand [delivery] for the client.

The second point is, none of that core bandwidth load happens, because those versions are not pushed over and over through the core of the network in order to populate all those edges. Which is orders of magnitude different in core bandwidth use — it is much less expensive to do and also more timely.

And then finally, it puts the burden, the work, on the cheapest resource, which is compute these days. And then we’ve put a lot of work into that resource to make just-in-time computation as timely and as compute-efficient as possible. So it’s really an inversion of the way things have been to date.

The other value, of course, of that scheme is that the reusability and efficiency helps all the way up the supply chain. Because there is a lot of complexity and cost that goes into making all those versions a priori and storing them and keeping the metadata in different databases and trying to reconcile it between them. And in this world, the media, the metadata and the code are all part of the object, they are with it for the duration, and they can be repurposed in all these different forms.

So if we are talking about a half-hour show that a consumer is going to stream on-demand — they have requested it from the [content provider’s] site and they want to watch it — how does that content get sent to the consumer?

Think about what the derivative of that show actually is. There is some master-level content, either it is in a file or it is in a playout stream. And that master-level content is combined together with other assets, maybe graphics or ads, etc., when it is actually served.

In the Content Fabric, what lives in the fabric is the content object, which is effectively a mezzanine-level version of that source, and there are various supporting assets as part of that content object. If it is live, those are created in real time as the live stream, the live edge, is played out. That was very important to timeliness. That gets created into an asset, and the referential structure of that asset is part of the sharding of the content around the fabric.

And then there is a routing system that finds those parts, and a just-in-time AV pipeline that we built in software from the ground up, which assembles those together with real-time transcoding and adds no additional latency as it is being served.

So the intelligence comes from the functioning of the software to do that retrieval and that assemblage. As far as the placement, the nodes can run anywhere. The efficiency of this means the network is actually small, especially in terms of what we think of as today’s consumer-scale distribution networks.

So instead of copies, as a piece of content is ordered, it gets assembled on the fly?

Yes. As it is requested to view, right, as it is requested for output.

Eluvio Content Fabric

When you talk about latency, what sort of benchmarks can you give for the latency improvement this approach brings compared to CDNs?

We’re serving 4K two-second segments and the time to first byte is in the tens of milliseconds and the segment arrival time is coming in at a third of the two-second deadline or less, so on the order of 600, 700 milliseconds or less. So that gives you a tangible view of what we are talking about in a VOD case. In a live case, because the fabric has a fully just-in-time pipeline, effectively it’s negligible latency.

In hard benchmark terms we are talking about under three seconds globally, from source to consumption. You can compute it based on the propagation delay from the source stream through the network, and there is no additional latency due to how the fabric operates. Compare that to what we think of as today’s distribution workflows, where ultra-low-latency streaming is attempting to get under 10 seconds consistently.
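The VOD figures Munson quotes above can be sanity-checked with simple arithmetic. This is an illustrative back-of-the-envelope calculation (the function and numbers are assumptions based on the interview, not Eluvio code): a 2-second segment must arrive within its own duration for playback to continue without stalling, so we can ask what fraction of that deadline the quoted delivery times consume.

```python
# Illustrative arithmetic (not Eluvio code): checking the quoted latency
# figures for 2-second adaptive bit rate segments against the playback deadline.

SEGMENT_DURATION_MS = 2000  # each ABR segment covers 2 seconds of video

def segment_budget_used(ttfb_ms: float, arrival_ms: float) -> float:
    """Fraction of the segment deadline consumed by delivery.

    A segment must be fully delivered within its own 2-second duration
    for playback to continue without stalling; lower is better.
    """
    return (ttfb_ms + arrival_ms) / SEGMENT_DURATION_MS

# Figures from the interview: tens-of-milliseconds time to first byte,
# segment arrival on the order of 600-700 ms.
used = segment_budget_used(ttfb_ms=50, arrival_ms=650)
print(f"{used:.0%} of the 2-second deadline used")  # 35%
```

At roughly a third of the deadline, the player has comfortable headroom, which is consistent with Munson's claim that the just-in-time pipeline adds negligible latency.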

So how does a programmer work with you? How do they implement this technology, and how do you sell it?

It’s very straightforward. We run this as a software service platform; that is, the running fabric provides end-to-end distribution. The basic process is to ingest the media into the fabric into a space (a semantic “space” that the content owner owns, which is also part of the security model). The ingestion is either file-based or stream-based, and it can be done from an API or a Web interface that is part of the fabric. The distribution is then inherently available globally. Offerings are priced on viewed minutes to start.

We’re obviously [working] with our first customers, and our first use cases are serving adaptive bit rate content packaged as final streams that have standard DRM and are suitable for browser, mobile and TV Everywhere kinds of devices. So that would be Apple TV, Android classes, Google devices, and also Roku, for example.

On the live streaming side, it’s the same principle on the output, and then on the live inputs you essentially configure the input target for the fabric. That could be either RTP [Real-Time Transport Protocol] or some type of HTTP packaged stream source, and typically it is higher bitrates and that is because the fabric works off of the highest-bitrate source.

Are there any other partners you are working with, and does any of this rely on a cloud platform like AWS?

No, not at all. It is fully compatible with media that is stored there. So the ingest can also refer to media that is on cloud storage in addition to standard storage. But no, the software does not rely on or utilize any cloud service technologies.

Who are some of your early customers and how exactly are they using Eluvio’s technology?

MGM is one of the media companies that found us early on and started experimenting with the fabric, and started to use it in production SVOD services. They have done that and were very happy with it, and now they are expanding it and making it part of their whole supply chain innovation effort. Currently we power some of their SVOD services with the fabric.

So what did that mean? We had to slot in and replace what was before a back end composed of an aggregator and a CDN stack, with transcoding of course. That was an existing property, so the front end, the CMS and the authentication for the end users, the subscribers, couldn’t change at all. It literally changed over without any impact to the subscribers, and it had to.

And that has a multiplatform audience like I described, an audience in several places around the world. They see that this has potential to have efficiencies all the way through their supply chain, because the source titles are not only usable in SVOD services, but in all of the other ways that they do sell-through.

And then also, it gives them the ability to really exploit the new things that they are doing to move to IMF-based (Interoperable Master Format) models for editing. This really pushes that philosophy through the rest of the distribution.

As the media technology industry heads into IBC, how do you gauge the overall financial health of the industry? There has been a lot of consolidation in the last couple of years and a lot of changes in how people sell technology. Where do you see things in September 2019?

Well, I think we are at an absolute tipping point. It is one of opportunity and also risk, and that is exactly why we are trying to bring this to market. Because we think that content providers must have this kind of movement in technology, otherwise the risk is too high. And vice versa, the opportunity side is bigger than ever because of the appetite for video communication in every single form.

Evidence of this is probably obvious to everybody. It has led to mass consolidation in Tier One companies, which is happening all the time. There is an arms race on original content production, and right now it is very much financed by profits from other parts of their businesses or overinflated stock prices. And that is not sustainable in its present form.

Finally, the appetite side is the best ever, but it’s colored by the fact that consumers are still in a bit of a crisis around how the economics currently work, because [their] data is exploited in it, to the point that it is now discussed in Congress a lot.

So that is why it’s for sure a tipping point. It’s one of opportunity and risk. And we think it is absolutely necessary to get some technology innovation that allows for content providers to have control, and control in a good way. It is good for the viewers’ data protection as well.

You joined the board of Avid a little over a month ago. I remember you started an ISP when you were in college, but Eluvio is your second real start-up, after you had a stint at IBM running Aspera. What is it like now to be founding this brand new company and also working with one of the oldest media tech vendors?

Well, first of all, give credit to the folks that instigated that, and that is the leadership at Avid. Jeff Rosica and the current board are really serious about taking the company forward. As they described their goals to me, that came through in spades. Avid has one of the oldest legacies of all the modern “tooling” companies in media, and I think they are very self-aware of what they would like to do to grow in response to the way the market is.

That is what led to it. I am new to the board and so I don’t have a lot of experience under my belt, but I think that is very sincere in terms of what the company is aiming to do, and I am thrilled to see it.

You are correct. I don’t have the sensibilities of an established legacy company. My sensibilities are in innovation. But that is exactly why I think they asked me to take part, in trying to help in that capacity.

When we talk about the experience you bring to the table, you have a lot of experience in the production realm, and the customers you worked with at Aspera are a lot of the same folks that Avid works with on a daily basis.

Of course, Avid serves the backbone of the media industry. I think also Avid has smartly recognized the internet-scale appetite for video, that there is a much broader audience in addition [to traditional media] that is keen to be part of making and distributing video. Both sides of those have gained priority, both Tier One and the growing market at large.

I do know the customer base of Tier One media quite well from my personal experience; I have been doing a lot of work with those companies. But I also try to have a view toward the way that the internet is shifting around video, and I think Avid’s view is to try to see both and to be very customer responsive.

And I know that is challenging in some parts of Avid’s history, but at the same time, that is quite sincere in the current group. And so those are the pieces that I want to stress.
