The No Execution Podcast

Shared security: Tradeoffs and its future with Zaki Manian - Ep 1

Episode Notes

We sit down with Zaki Manian, Co-founder of Iqlusion and Sommelier, to discuss the future and tradeoffs of shared security with Mustafa Al-Bassam and John Adler of Celestia Labs. Moderated by Ekram Ahmed.

 

Podcast platforms

Spotify

Apple

Google

 

Timestamps

0:00 Start

3:45 Introduction to shared security

7:45 Mustafa’s thoughts on shared security

10:52 Zaki’s comments

15:03 John Adler’s POV on shared security

21:04 Introduction to Celestia

22:45 Rollup narratives

28:13 Interchain staking compared to Celestia’s shared security

36:15 Data availability

48:44 IBC

53:27 Shared security tradeoffs 

57:20 Governance and app-specific chains

1:03:44 Bridge security

1:08:45 Modular blockchains

1:10:50 Differences between Polkadot and Celestia

1:16:37 Celestia light clients

1:19:25 Optimint, Cevmos, and bootstrapping new chains

1:22:39 Problems with data availability

1:25:21 Sovereign rollups and sovereign communities

1:29:14 Closing thoughts

 

Social links

Zaki's twitter

Mustafa's twitter

John's twitter

Ekram's twitter

 

Resources

Learn more about Celestia

Join the community discord

 

Intro & outro music

Episode Transcription

Zaki, welcome. I think we have our folks here. Zaki, how are you? Happy to be here? Dude, we're so happy you joined. So, everyone, welcome. I think we can get started. I'm going to set the stage for the conversation and introduce our speakers, so just bear with me, and then I'm going to shut up and let the dialogue just flow.

We are all convening at an interesting time. Shared security is one of the hottest topics of the moment. For those unfamiliar: at a really high level, shared security is the idea of multiple chains inheriting some security from a common source. There are different methods by which shared security can be delivered.

Hence there's just a lot of activity and, consequently, buzz around this concept. So, a lot of news: Cosmos will be introducing interchain security, which we'll definitely be talking about. And Polkadot, I think we've all seen, has introduced parachains, which share the security of the relay chain.

So the world is clearly moving in the direction of shared security, for better or for worse. And at the center of the movement is a growing and intensifying debate. There are critics who disagree with the idea of shared security, claiming that it introduces security risks and trade-offs that outweigh the benefits.

And on the other hand, there are staunch supporters of shared security who believe that it is necessary for chains to communicate more securely in a multi-chain landscape. To help us make sense of it all, we have three of the brightest technical minds in web3, who all have pretty strong opinions on this topic.

And hence, we brought them all together. John and Zaki, we'll real quickly go through introductions for those who are joining who don't know these folks. We have Zaki Manian, co-founder of Sommelier and Iqlusion, and someone who has contributed significantly to Cosmos.

We have Mustafa Al-Bassam, who previously co-founded Chainspace, which was acquired by Facebook, and is now the co-founder and CEO of Celestia Labs, which is quickly becoming known as the first modular blockchain network. And then we have John, the creator of optimistic rollups, previously at ConsenSys and now a co-founder of Celestia and Fuel Labs. Gentlemen, welcome. How are we all doing?

This is a fun topic. I don't think people really realize that this is not a new topic. This has basically been the central topic of blockchain protocol design since like 2015. It just seems that the number of people who are interested in talking about it has suddenly exploded. I feel like the two things that have changed since 2016 are that these systems are much less hypothetical than they were then.

So we have real systems, and now we have real users interacting with those real systems. So we're starting to get some empirical evidence. A lot of these questions about shared security are just hypotheses about how users and markets will form on top of blockchains. And while we're certainly not in an end state yet, we at least have users and markets that have formed, so we can start to ask questions about them.

Okay, so let's get into it. Zaki, why don't we start off with you? There are a lot of non-technical folks here, and I've gotten scores of DMs about the overall idea. Can you quickly introduce the topic of shared security?

I'm going to try my extremely non-technical explanation of shared security and see if it works, and also see if Mustafa murders me for explaining it wrong. We'll see.

Okay, so here's my central point of view. The whole question of blockchain security really comes down to this question of who can rug you, who in the system actually has the ability to rug you, to make your money just disappear into nothing. Anyone who's been interacting with the DeFi or NFT ecosystems is starting to get a visceral sense of what a rug feels like. It's oops, my money's gone. Right. And the basic question behind shared security follows from that. In any blockchain system there's a lot of attack surface: how and where can you get rugged? There can be a bug in the system, like the software developers could make a mistake.

At the base layer's protocol, the smart contract author can either intentionally backdoor the system or leave a vulnerability in it. The team can use governance tokens or a multisig to screw over the project. There are a lot of ways in which a project can get robbed. But the central question at the heart of shared security, especially as we've transitioned into proof of stake, is: can the validators rug you, and how many validators have to work together to rug you? That is basically the question of shared security.

In the shared security vision, every time people try to build shared security, what they're basically aiming for is a system in which the number of validators that have to collude to rug you on any scalable blockchain is equivalent to the number of validators that have to collude for the whole chain to fail in a major way. So you're trying to make a scalable system in which the same group of validators has to collude to rug you anywhere in it. And the people who are building non-shared-security systems that are scalable and interoperable

have been willing to say, hey, it would be okay if part of the system fails, because this set of validators can fail and the system as a whole continues marching along.

So, you know, is it acceptable to live in a world in which the Polygon bridge operators or the Polygon validators can essentially steal all of the funds that are bridged over the Ethereum-Polygon bridge? Is that an acceptable world or a completely unacceptable one?

And that really is the central question of shared security. Mustafa, let's kick the ball over to you. Please share your high-level thoughts and your general point of view on shared security.

Yeah, I think that's a good framing. I would contextualize it within a multi-chain ecosystem, and get into more of the details of what shared security means in practice. It's very applicable to systems where there are multiple blockchains that each have their own state machine.

As it stands right now, for example, Ethereum is just one chain with one state machine that executes everyone's contracts. But recently people have realized you actually have to have a multi-chain ecosystem, with multiple blockchains, because you just can't scale web3 with a single state machine that processes everyone's transactions.

So you effectively have to shard execution. And to shard execution, to split execution across different nodes or different chains, you basically need a multi-chain ecosystem. Multi-chain ecosystems come in different forms. A few years ago there were two main kinds of multi-chain ecosystems.

There was Ethereum 2.0 sharding, back when Ethereum had execution shards, where each shard was its own chain. And then you had more of the Cosmos style of systems, which is similar to sharding except that there isn't a predefined set of shards; anyone can create a shard. And in multi-chain ecosystems, it's important to have composability or interoperability between those chains.

So let's say you have chain A and chain B, and let's say chain A sends funds to chain B, or a user transfers funds between those two chains. If one of those chains happens to be insecure, or has an insecure validator set, then it can potentially steal the funds that have been transferred into that chain from other chains.

And that potentially creates a systemic risk to the ecosystem. So what shared security says is effectively: if you have, let's say, two or three chains, how can we make it so that in order to break one of those chains, you have to break all three chains? That's, from a high level, what I consider shared security to be, in its most basic form, I suppose.
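To make that "break one means breaking all of them" point concrete, here's a toy back-of-the-envelope model. This is my own illustration, not something from the episode; the validator counts and the one-third collusion threshold are hypothetical assumptions.

```python
# Toy model: how many validators must collude to steal bridged funds,
# under independent security versus shared security. The 1/3 threshold
# and the chain sizes below are illustrative assumptions only.
from math import floor

def min_collusion_independent(validator_counts, threshold=1/3):
    """With independently secured chains, an attacker only needs to
    break the weakest chain that holds the bridged funds."""
    return min(floor(n * threshold) + 1 for n in validator_counts)

def min_collusion_shared(shared_validator_count, threshold=1/3):
    """With shared security, breaking any one chain means breaking
    the single common validator (or data availability) set."""
    return floor(shared_validator_count * threshold) + 1

chains = [100, 21, 150]  # validator counts of three hypothetical chains
print(min_collusion_independent(chains))   # weakest link: 8 validators
print(min_collusion_shared(sum(chains)))   # shared set: 91 validators
```

The point of the sketch is only the asymmetry: independently secured ecosystems fail at the weakest chain, while a shared set makes every chain as expensive to attack as the whole.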

Thanks, guys. That concept of the central question is really interesting. Zaki, how do you comment on what Mustafa just said? Do you agree with him, or are there parts you disagree with or would build on top of?

Yeah, I very much agree with Mustafa's framing. I have tried that framing myself, and sometimes it seems to work for people and sometimes it doesn't. I would also say that basically until Celestia came along, the idea of a very scalable system of multiple chains with shared security was extremely hard.

When I started working on this problem in like 2015, there were a lot of people who had basically said, okay, I want to build a multi-chain ecosystem which has this property. And this was a central topic of conversations in the early days of Ethereum 2.0, Polkadot, and Dfinity.

Enormous amounts of effort were put into: how do I build a system where you have multiple chains, but if you break one you have to break all three? And really what was different about Cosmos was that Cosmos was about building a system in which you can break one chain, and the cascading failure is in some ways understandable and limited by the other parts of the architecture of the system.

But people will still definitely get hurt. So then Celestia came along, and Celestia said, okay, here's a system that is plausibly as scalable as Cosmos but can actually have this property, where you can't have a state machine failure where interoperability basically gets broken and the cascading economic effects propagate out of it.

I mean, we all got to see this very vividly in the wormhole hack that just happened, where it was a smart contract failure and not a failure of the validator set, but you end up with, you know, 120,000 forged ETH bridged over by subverting a piece of the bridge system.

And then cascading economic disruption emerging from that, which was averted only because a single investor, Jump, was willing to step in and mitigate the loss of funds. Right. This is the kind of economic disaster that shared security proponents envision happening.

In a world of interoperable non-shared-security blockchains, you have these essentially economic crises that emerge when a single bridge or a single blockchain fails. And the dream of shared security is a very scalable infrastructure in which those kinds of attacks are mitigated.

John, welcome back. So we were discussing our point of views on shared security and the central question that underpins it. How do you see shared security? What's your point of view and what do you think is the underlying question? 

Yeah, so I generally agree with what Mustafa said, to no surprise. I did want to point out a few interesting things. One: it may be true that, for instance, the wormhole hack wasn't the result of a failure in the validator set, if you want to call it that, and it's pretty rare to see cases of blockchain-to-blockchain bridges being compromised due to validator sets, as opposed to smart contract bugs.

But there are many cases of blockchain-to-centralized-exchange bridges being compromised by a blockchain's validator set or miner set. Ethereum Classic is a pretty big one from the past few years. I think they had maybe three or four, you know, very large re-orgs.

And from the perspective of the bridge, the fact that the other side of the bridge is essentially an exchange instead of another blockchain is irrelevant. It's an implementation detail. This is a very clear example that block producers of a blockchain can and will attack a bridge that is not trust minimized. It happened with Ethereum Classic, but so far it has only happened on some smaller chains, and we should eventually see it happen to larger chains.

I've been waiting for this to happen for so many years. I'm very excited to see it happen; as far as I know it hasn't. I've tried to pay people during Game of Stakes and Game of Zones to do these attacks, but to no avail. I look forward to them happening. So I think the scenario of it happening is less likely to be the validator set acting purposely maliciously.

I think the more likely scenario is a network upgrade that is not agreed on. Let's say, for example, there's some popular chain, and let's say a terrorist group uses the chain, and the community might advocate for locking or freezing or potentially reassigning the funds on that chain.

And that would require a hard fork, because it would potentially violate the state transition function of that chain. But because the bridge itself doesn't actually verify the state transition function, it will just follow the validators.

Even if it's a potentially controversial network upgrade. So I think, long term or even short term, that is the more likely kind of attack or failure scenario: effectively a network upgrade that most of the community might not agree on, but the validators have been compelled to do, let's say by, I don't know, law enforcement, for example, or other parties.

Zaki, can you please comment on what Mustafa just said? I think what he said was interesting, but I'm curious how you see it.

I definitely think of this as stuff that I'm somewhat excited to see starting to play out in the real world. There are a bunch of layers to this question that this makes me think of. One is that we still don't really know what a controversial fork looks like on top of a chain that has a significant amount of DeFi on it. Because DeFi has resulted in a world in which there are a lot more infrastructure players, including stablecoin issuers and oracles and all of these other parties, that are big parts of this question of how a fork happens.

How would an invalid state transition be processed? Would it need to be reviewed? How do I get a controversial hard fork? We have yet to see a controversial hard fork on top of a chain that has DeFi on top of it. So there's a certain amount of mega unknowns about this.

And I think this is a little bit of why Celestia may be an important contributor in this world, where I also think the Cosmos interoperability model has a lot of virtues. It may be that the risk of a controversial hard fork to an ecosystem with DeFi on top of it is so great that it's intolerable not to have further assurance that an invalid state transition won't happen.

We're talking a lot about Celestia. Real quickly, Mustafa, I'm seeing a lot of chatter on Twitter, and some folks don't know about Celestia. Can you quickly introduce Celestia and tell us about it? So Celestia is what is described as kind of the first modular blockchain network. But the core product is a pluggable data availability layer.

So you can think of Celestia as basically a very simple blockchain, a very simple layer 1, that only does the core things that a layer 1 should do. And that is ordering messages and making the data of those messages available. If you have those two core primitives, then you can provide shared security across two different chains. As long as those chains are using the same data availability layer, they receive shared security.
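Those two primitives can be sketched in a few lines. This is my own highly simplified illustration of the concept, not Celestia's actual API; the class and method names are hypothetical. The layer only orders messages and serves their data back, and two rollups share security by replaying the same log.

```python
# Toy data availability layer: a globally ordered log of namespaced
# messages. It does no execution; rollups derive their own state by
# replaying their namespace. (Illustrative sketch, not a real API.)

class DataAvailabilityLayer:
    def __init__(self):
        self._log = []  # globally ordered, fully available message data

    def submit(self, namespace: str, data: bytes) -> int:
        """Order a message and make its data available; return its position."""
        self._log.append((namespace, data))
        return len(self._log) - 1

    def messages_for(self, namespace: str) -> list[bytes]:
        """A rollup replays only its own namespace, in global order."""
        return [d for ns, d in self._log if ns == namespace]

da = DataAvailabilityLayer()
da.submit("rollup-a", b"tx1")
da.submit("rollup-b", b"tx2")
da.submit("rollup-a", b"tx3")
print(da.messages_for("rollup-a"))  # [b'tx1', b'tx3']
```

The design choice the sketch illustrates: because the layer never executes anything, adding a new rollup costs the consensus nodes nothing beyond storing and ordering its bytes, which is what makes "shared security for many chains" plausible.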

Awesome. Going back to what Zaki said about mega unknowns. Mustafa, John, any comments on this? Do you guys see any mega unknowns in the context of shared security?

What do we mean by mega unknowns? If we play this out from the near term to the long term, are there any variables that you think could derail the shared security vision?

So I think there are two different narratives for shared security, or why shared security and, more generally, why rollups are needed. Recently the narrative for why rollups are needed has been security, or shared security, in the sense that you should use a rollup, even though a rollup is still quite expensive, like a few dollars per transaction depending on which rollup you use. You should still use a rollup because it's more secure, with a trust minimized bridge, as opposed to using some other chain. But then you have EVM chains like Avalanche and Polygon that have 1 cent transactions.

So the risk I see there is that in practice users don't actually care that much about trust minimization. They care about transaction fees, especially when the transaction fees on different chains are 100x cheaper. Then it's almost a no-brainer for users to take the risk of the bridge, because they're getting 100x cheaper transaction fees.

I think if we really want to have a shared security ecosystem, the actual narrative we should be pushing is that you should use rollups because it's easier to deploy new blockchains as rollups than to deploy your own blockchain with its own validator set, and deploying a blockchain for your application has better benefits than just using a smart contract on the same blockchain as everyone else.

So, for example, in the future I could see application-specific blockchains, or specific rollups: let's say some DeFi app or some NFT game might have its own rollup, and that chain does all the execution for that specific game, because the game doesn't really need to compose that much with other smart contracts.

So there's a really interesting question here. And the question is: are rollups a useful tool primarily for users, or are rollups a useful tool primarily for developers?

And I think we're currently in a very interesting moment in the history and story of the initial generation of rollups that started to get deployed this year. They were all targeting the EVM as their virtual machine, and implicit in the idea of targeting the EVM is that you're very much targeting users and not developers.

But they're certainly not targeting anyone who wants to build a custom blockchain, or the Cosmos SDK style of application-specific blockchains. And you know, we built the Cosmos SDK with this in mind: we said, hey, we think there is a substantial number of developers who will want to build their own blockchain at some point, and there needs to be a toolkit for them. So we built a toolkit for them. But right now there's a question of the potential of something like a Cosmos SDK for rollups.

And now that Cannon, Optimism's next-generation MIPS-based, general microprocessor targeted environment, has been open sourced, you've started to see the first base layer of what could become the Cosmos SDK for rollups. And Arbitrum, when Nitro comes out, is another alternative, which is WebAssembly based, for building a Cosmos SDK for rollups.

And as more stuff in the zero-knowledge world gets open sourced, we're going to start seeing the capacity to build a Cosmos SDK for rollups. The possibility of that evolution seems to me as significant as when we were developing the Cosmos SDK back in like 2018.

Let's talk about the Cosmos hub interchain staking model. How does that compare to Celestia's model of shared security? Mustafa, Zaki, whoever wants to go first.

I mean, I think Zaki and I actually agree on this; we've discussed it before. I see the interchain staking module and Celestia's shared security model as having different purposes. So I guess to recap, the way the Cosmos interchain staking module effectively works right now, or at least the way it works in V1, is that you can set up a chain where the validators of the Cosmos hub also validate your chain.

So effectively the Cosmos hub validators are also validating other chains. This is quite similar, to some extent, to the Polkadot shared security model, where there are parachains and the relay chain takes on the validation of those parachains.

And the problem with this model is that it doesn't scale to thousands or millions of chains. It scales to an order of magnitude of maybe a hundred chains, which is why, as you know, Polkadot has to auction off its parachain slots. There's obviously a limit, because the validators can't validate every single chain in the world.

So yeah, our aim is, we see a world with millions of blockchains, so we're catering towards the tail end of blockchains. Imagine you can just deploy a blockchain; maybe you're still a small project and recruiting the Cosmos hub validator set is too expensive.

Then you can just deploy on Celestia for data availability and consensus. One interesting part of that is, if your rollup does not initially have a lot of activity, you don't have to post blocks for that chain every single epoch. You can just post blocks as often as you need to.

So it's pay-as-you-go data availability and ordering. Like I said before, I think the interchain staking module is useful as a value accrual mechanism for the ATOM token. I see potential where the Cosmos hub might want to deploy or launch its own settlement layer for rollups. So they might deploy a specific hub, and then there are two chains in parallel, but they're interchain staked with each other. So it's kind of a way for the hub to have a suite of chains.

Zaki, same question, will you please? Well, I agree with everything Mustafa has just said. And I think the biggest way of thinking about it is, Cosmos shared security is a sort of expensive solution for the deployer of a chain that hopefully gets them a business benefit, which is a tight partnership between this new chain and ATOM holders, and a solid value accrual mechanism for that partnership.

And hopefully this creates an opportunity for value, and it coexists in an ecosystem where, you know, you can start a validator set with three machines on AWS and just go from there. So shared security is a very specific solution.

I think one of the other things we're exploring a little bit more is, my expectation for the next few years is that there's going to be the Celestia SDK, or I don't know how we're going to brand this, but there's going to be this rollup SDK that emerges out of the atomic components of some of the zero-knowledge proof work, the MIPS work on Cannon, and the work done on Arbitrum Nitro.

And there's going to be this emergence of a new developer toolkit. And my expectation is that there is going to be parallel optimization of both the Cosmos stack of next-generation interoperable blockchains and the modular architecture chains.

Where the Celestia base layer evolves in a much more scalable direction, but also these SDKs for building fraud-provable or validity-provable state machines start to proliferate. And I expect these things to happen in parallel for quite some time.

Interesting forecasting. Mustafa, do you also see the same? Do you also expect what Zaki is alluding to, or do you see it differently?

Yeah, I mean, this idea of optimistic rollups that can do fraud proofs on any arbitrary computation is really powerful, and the idea of interactive verification games in general, where you can potentially compile any kind of computer program to MIPS, is very powerful. One thing I want to try and figure out how to do is deploy Cosmos SDK zones, for example, using Cannon and MIPS.

I've already managed to compile the Cosmos SDK, but there are still modifications needed after you've gotten the oracle working. But the idea is, if we can actually deploy a Cosmos SDK zone that way, that's very powerful, because anyone will be able to deploy a Cosmos SDK zone and instantly have a trust minimized bridge with a settlement layer, without having to worry about creating a new validator set.

Thanks. Let's talk about data availability. A question for both of you: why is data availability essential to shared security? Zaki, why don't we start with you?

So, there are a couple of problems that data availability is designed to solve. I typically like to explain this in terms of the optimistic rollup model and the zero-knowledge rollup model. One of the most important things about these systems is who can propose a block and who can verify the state.

So in an optimistic rollup model, you need to have a broad set of people who can propose blocks and who can verify state, because you need people to detect if there is an invalid state transition and then play the interactive verification game.

Then second, in the zero-knowledge world, the zero-knowledge proof provides proof that every state transition is valid, but you need data availability to enable other people to propose blocks. Otherwise you end up in a situation where a single centralized party controls consensus.

So really, the core properties of a blockchain, censorship resistance and auditability, are entirely dependent on this one property, data availability, if you want to operate in the sort of hyper-scaled million-blockchain world. And that's why data availability is the fundamental problem of blockchains.

Yeah. So, I mean, it's an interesting question, because it sounds very counterintuitive that data availability is the core primitive that a blockchain provides. We've seen the Ava Labs team saying that data availability is just some random problem.

They don't understand why it's even relevant in the first place. So it's very counterintuitive if you've been looking at blockchains through the old model that Bitcoin introduced, where consensus and execution are coupled together. But it makes sense once you have a model where the consensus nodes don't execute your transactions.

Then it starts to become clear why. Before Celestia was created, it was originally called LazyLedger. And the reason it was called LazyLedger was that I was trying to figure out what the minimum viable blockchain you can make is, and what the least amount of work a consensus node could do for that chain is.

And if you took it to an extreme, how much computation and work could you actually push to the end-user light clients, so the non-consensus nodes do it themselves? And the conclusion I came to was, if you wanted to take this to the extreme and do a version of Bitcoin this way, you basically just need a chain where you can dump the data and every single transaction is allowed.

So imagine a version of Bitcoin, for example, where invalid transactions are allowed to be posted on-chain, where you can steal people's money on-chain. How would that be secure? It would still be secure, because you just insert a rule in the user's node to say: we will simply ignore invalid transactions that have been published on the chain.

Let's say there are two transactions trying to spend the same coins, a double spend. Then obviously you just ignore the second one. But in order to know which one came first, not only do you need ordering over the transactions, you also need data availability, the complete set of transactions that have ever been posted, in order to know which one is the first transaction with a specific property, for example the first transaction that spent a given coin.
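The client-side rule described here can be sketched in a few lines. This is an illustrative toy model, not real Bitcoin code: given a totally ordered, fully available list of transactions, each user's node derives validity itself by ignoring invalid and double spends.

```python
# Toy "first spend wins" rule: the chain accepts anything, and the
# user's node filters. This only works if the node can see the complete
# ordered transaction set, which is exactly the data availability need.

def apply_first_spend_wins(ordered_txs):
    """ordered_txs: list of (txid, spent_coin, new_coin) in chain order,
    where spent_coin is None for a mint. Returns the txids this client
    treats as valid."""
    unspent, valid = set(), []
    for txid, spent_coin, new_coin in ordered_txs:
        if spent_coin is not None and spent_coin not in unspent:
            continue  # invalid or double spend: published, but ignored
        if spent_coin is not None:
            unspent.remove(spent_coin)
        unspent.add(new_coin)
        valid.append(txid)
    return valid

txs = [
    ("mint", None, "coin1"),
    ("spend1", "coin1", "coin2"),   # first spend of coin1: valid
    ("spend2", "coin1", "coin3"),   # second spend of coin1: ignored
]
print(apply_first_spend_wins(txs))  # ['mint', 'spend1']
```

Note that if any transaction in the ordered log were withheld, two honest clients could disagree about which spend came first, which is why ordering alone is not enough without availability.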

So this is intuitively why, if you look at it from first principles, data availability is a core primitive of blockchains. And this isn't anything new, by the way. Even back in 2014, Bitcoin developers were arguing about this and discussing it. There was a mailing list post where Peter Todd was arguing with Gregory Maxwell about what a blockchain fundamentally is. And Peter Todd was the first person to realize that a blockchain is fundamentally a data layer; he refers to it as a proof of publication system. Effectively, you're proving to people that you've published data.

I want to pull on the rope of data availability more. Zaki, back to you. Any other strong opinions on data availability? Tell us more about how you see its role.

Okay. So two things are worth mentioning. One of the things that I think is under-appreciated about Nakamoto consensus in proof of work is that it economically incentivizes publishing blocks, as long as the concentration of mining power is below the limit where selfish mining becomes profitable.

And so one of the things that I think Satoshi really understood at some fundamental level in designing the original Bitcoin consensus is that it needed to provide economic incentives, because if you mine a bunch of blocks and don't publish them widely, the likelihood of people building on top of those blocks is low and you never get the block rewards for those blocks.

Those blocks end up not on the longest chain, so you lose your block rewards. So you create this economic incentive for miners to push their blocks out to other mining pools and to make their blocks available at the lowest possible latency, to the point where now we have the FIBRE relay network, and the Bitcoin network almost never forks, because there's both infrastructure and economic incentives for miners to make their blocks available super fast and super efficiently.
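The incentive Zaki describes can be made concrete with a back-of-the-envelope calculation of my own (not from the episode), using the common exponential approximation for block arrivals: the longer your block takes to propagate, the higher the chance a competing block orphans it, and an orphaned block earns nothing.

```python
# Rough model of why miners are paid to publish fast. Assumes block
# discovery is a Poisson process, so the probability that no competing
# block appears within t seconds is e^(-t/T). Numbers are illustrative.
from math import exp

BLOCK_INTERVAL = 600.0  # seconds, Bitcoin's target

def expected_reward(reward: float, propagation_delay: float) -> float:
    """Expected reward = reward * P(no competitor during propagation)."""
    return reward * exp(-propagation_delay / BLOCK_INTERVAL)

for delay in (0.2, 10.0, 60.0):  # fast relay network vs slow gossip
    print(f"{delay:5.1f}s delay -> expected reward "
          f"{expected_reward(6.25, delay):.4f} BTC")
```

Even under this crude model, shaving propagation from a minute to sub-second latency recovers a meaningful fraction of the reward, which is the economic pressure that produced infrastructure like FIBRE.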

And I think this is one of the things that pre-blockchain BFT research just did not understand: they did not understand why this problem was important to solve, and failed to produce a solution to something that Satoshi somehow intuited, that there was this missing idea in open-network public consensus, which is that you must publish your data, otherwise you lose money.

Which I think has been the deeper insight that is frequently lost on classical BFT researchers. So that's one digression. I think the other thing that is important to understand is the limit on computation that we all currently experience.

It's not just how many cores my computer has or how fast the CPU is; really, the limitation on how many transactions a blockchain can process is the IO bound: how much memory does it read? How much disk does it write?

You frequently hear about this in the context of Ethereum scaling: even if we made Ethereum's consensus super fast, we would still have this limit of IO. By decoupling data availability from this process, you're publishing and making transactions available, but not actually running the IO that is required to process the transactions.

This is the only possible way in which we can truly achieve petabytes of block space: only a minority of node operators for any given state transition or any given blockchain actually process those transactions, because they care about them, but the underlying transaction data is available to the entire world. That's the only plausible path to petabytes of secure block space as a strategy.

Mustafa, can we get your thoughts on the two points that Zaki just made?

Yeah, I genuinely agree. In general, I think Bitcoin got many things right that previous academic research hadn't realized. And in general, I find that a lot is lost on traditional distributed systems researchers when they look at blockchains, because it's a completely different model, and they often try to apply traditional distributed systems concepts to blockchains, which most of the time works and is necessary.

But sometimes it does not work, mainly because traditional distributed systems have a state machine replication model with an honest majority assumption for the correctness of the state. So traditional BFT systems assume that, say, two-thirds of the operators or validators in the BFT consensus are being honest.

But actually the threat model for Bitcoin is much harsher than that. The big point is that the model does not assume that the miners are honest, because it assumes that users are running full nodes, or that there are light clients that support fraud proofs, which are referred to as alerts in the original Bitcoin white paper.

Let's transition. I received a significant number of questions around IBC. A question to you both: what role do you see IBC playing in a multi-chain world, and what are your general thoughts about IBC as it relates to shared security? Zaki, why don't we start with you?

So there is IBC as it exists as a concrete piece of software, which is a Tendermint light client and a set of application protocols that live on top of it, in this mutually communicating Tendermint light client world. But IBC is also a framework on which you can build lots of stuff. And so you could build a version of IBC that has data availability proofs and a challenge period in it, or requires a validity proof.

These things can be included in the IBC model. So I think it's going to represent an interesting question as to which direction IBC evolves: does it stay out of these state machine verification components, or does the technology of rollups actually start migrating into the IBC world?

Yeah, so to add to that, we have looked at adding fraud proofs or zk-proofs to IBC before. Where this becomes relevant is... so, first of all, it's a common misconception that rollups must post fraud proofs on-chain. Ethereum rollups work that way because they have an enshrined bridge between the rollup and the Ethereum chain, and they post it to the main chain because they implement a light client for that rollup as a smart contract.

But there are different ways you can do that. Instead of posting the fraud proof to some chain, you can actually just distribute it on the P2P network, assuming that the chain you're bridging to has already embedded a light client natively as part of its code.

So this is kind of more in line with how IBC works, because IBC supports different types of light clients. You could potentially introduce a new type of light client into IBC specifically for your rollup that, in addition to checking that your rollup chain has the correct signatures by the correct operators, also listens out for fraud proofs for your chain.

But it listens for the fraud proofs on the P2P layer. So think of Mina as an example: Mina is a blockchain that has zk-proofs for all of its blocks, but where do those zk-proofs get posted? They don't get posted on-chain like a rollup's; they just get distributed on the P2P layer. And you can do the exact same thing with fraud proofs.

It's easy with single-round fraud proofs, but it gets a bit more complicated with interactive fraud proofs. It's still technically possible, though, and there are researchers who have schemes for it.

So we're going to transition to audience Q and A very shortly. But before we do: Zaki, please summarize the trade-offs and highlight your higher-level message for the audience on shared security. Is there anything you want to leave us with before we go into Q&A?

Okay, so there's one thing I would say is a meaningful trade-off, which is that all of these fraud- and validity-provable state machines, with all of the machinery around verifying them, generating them, and performing computations inside of them, are extremely new, extremely cutting-edge, extremely exciting computer science.

But on the other hand, we have Tendermint, which exists, and generally the space of blockchain BFT consensus. What I think sometimes gets a little bit lost in the conversation is that the non-shared security model has the advantage of using fairly mature technology as its underlying interop layer, technology that has been developed over the last seven years or so.

Whereas all of this stuff that is enabling shared security is very much newer. So it's interesting that I sometimes see the perception that shared security is safer than the software that's running IBC now. I think the theoretical limit of safety of shared security is much higher than IBC's, but the actual practical, applied security of these systems today is in favor of the IBC model. Presumably that changes at some point, when these fraud-provable state machines are fully mature.

Before Q and A, Mustafa, please leave us with your final take on shared security and your stance.

Yeah, I don't disagree with that. To me, the best use case of shared security is not so much trust minimization in bridges, even though that's actually very important. To me, it's about the ease of deploying your own blockchain.

It's easier to deploy a blockchain as a rollup because you don't have to worry about having your own validators, as opposed to having to bootstrap a secure and decentralized validator set. And so, you know, I agree with Zaki in that I'm not a trust-minimized bridge maximalist.

I think people should do whatever is reasonable for that use case and whatever is the easiest way for them to deploy their own blockchain. As long as that chain has the threshold of security that they require.

Okay, so I think we've had a pretty good discussion, and since we've been going for roughly an hour it's time for Q and A with the audience. Let's field some questions. I've been seeing things on Twitter, so please raise your hands, but I'll start off with one question that got a decent amount of likes. Here's the question: many applications on Ethereum, such as Maker and Compound, already trust governance tokens for safety; they might as well use them for consensus too. Scoping governance token permissions is a similar problem to designing application-specific chains. Opinions? What's your take?

I guess there are several ways of looking at that. Yeah, I mean, that could potentially make sense for Compound. Maybe they have decided that they have sufficient decentralization, a sufficient distribution among the governance tokens, that it could be a validator network. Although I don't think that's likely, because if I remember correctly, 50% of those governance tokens are owned by two or three entities, one of which is a16z.

Or I might be getting that mixed up with Maker. But the point I'm making here is that the token distribution for a governance token is not necessarily the ideal token distribution for a validator set or staking distribution.

So that might be appropriate for Compound, but it might not necessarily be appropriate for all chains. The second thing I would say is that not all applications might be in favor of on-chain governance. And even if they are in favor of on-chain governance, they might not be in favor of governance being able to arbitrarily change the state machine or code of that smart contract.

They might only be in favor of on-chain governance for changing specific parameters of that protocol, not necessarily for changing the entire state machine or state transition function of that protocol. And the last thing I would say is that my vision for blockchains in general is blockchains as sovereign communities, or more generally, social coordination platforms.

To me, the most powerful thing about blockchains is that if a group of people wanted to organize something and get together for a common purpose, they could create a DAO. They could choose to create a DAO on Ethereum, but creating a DAO on Ethereum is kind of like incorporating a company in a specific country. By creating a DAO and incorporating an entity on Ethereum, you're effectively bound by the social contract of Ethereum. Let's say that your DAO gets hacked: you have to convince the entire Ethereum community if you want to hard fork. Or another case might be a hard fork that introduces new EVM opcodes or resource pricing that is unfavorable to your DAO or application.

So the future model for blockchains I see is one where each application gets its own chain. Each chain is a community of people that have shared interests or shared beliefs. And in that model, I see hard forks as a feature: hard forks can be used as a social recourse mechanism. The smaller the chain, and the fewer applications it serves, the more likely the community of that chain is to agree on upgrades and changes.

And so you can kind of have a community per chain, where you use your chain as a social coordination platform to implement decisions made by your community. And you might not want to use, you know, plutocracy, whoever has the most tokens, as your governance mechanism. I think that's one of the most powerful things about this application-specific blockchain paradigm.

Zaki, we'll kick it over to you. Before you speak, folks: we are fielding questions here, so if anyone does want to ask a question, please do raise your hand. Zaki, go for it. I know Mustafa just dropped a lot on us; please comment.

I mean, I don't know what to do with it, other than to say how joyous it is when you hear someone recite the Cosmos thesis back to you five years later. I think there is this enormous logic toward application-specific blockchains. And I think there are always going to be some sort of mega application blockchains that are, in some sense, application specific; they may be centered around a particular set of DeFi primitives, or a particular DAO, or something like that. There will always be big blockchains in this world, but I've always fundamentally believed that enabling this power-law distribution of blockchains, where there are small, super secure community settlement engines, is the mission of this space.

There are a lot of specific software development things, but Celestia is fundamentally an enormous step forward in the very long-term vision of Cosmos. You know, Cosmos is a very long game that has made enormous strides forward this year. It is really exciting to see 37 interoperating Cosmos chains live in the world, and I expect that to be in the hundreds or the thousands, and then someday we want to be able to go to the millions, and that is the infrastructure that Celestia is building.

We got a pretty specific question, I think, from Ansem: what is the difference in the security of bridges from L1 to L1, L1 to L2, and L1 to sidechains versus IBC, and how significant are the trade-offs there? Mustafa, thoughts?

Yeah, so I have a blog post about this, called Clusters, that categorizes bridges into two types: what I call trust-minimized bridges and what I call committee-based bridges. The bridge between an L1 and a rollup is considered to be a trust-minimized bridge, and what that means is that even if the rollup operator misbehaves, the rollup operator cannot steal your funds.

On the other hand, a bridge between an L1 and a sidechain is very similar to a bridge between two Cosmos zones using IBC. It's what I would refer to as a committee-based bridge, where you have to put your trust in the committee, which might be the validators of that chain, who operate the bridge, attest to the state of that chain, and make sure that only blocks that have valid transactions are relayed across the chains. Therefore, in order for your funds not to be stolen, you have to trust this committee.

Zaki, care to comment?

There's one thing that matters, which is a sort of subtle difference between various kinds of committee-based bridges. And the question is: do you have to do business development with the committee in order to interact with the bridge?

And if you look at committee-based bridges that are not IBC, like the Wormholes of the world, you have to go to a bunch of meetings and convince a bunch of validators to connect to your chain if you want to join the Wormhole network, for instance, or any of the enormous number of other committee-based bridges.

Right now we live in a world where there is an extremely large number of committee-based bridge projects and a relatively small number of blockchains. So, for any given blockchain, someone will probably be willing to bridge you. But that stops making sense as the number of blockchains starts to grow.

And IBC is very special in that it does not require business development with the committee or the validator set in order for two blockchains that speak IBC to connect with each other. I think that's one of the more important properties of IBC.

And one thing I would add to that: I'm not inherently against committee-based bridging; I think we need both. There's this potential world where rollups themselves have committee-based bridges. You might deploy a sovereign native rollup on Celestia that itself has a committee-based bridge, operated by a bridge provider, to other chains. And if you think about it, that's basically what Ethereum is, if you turn your view upside down.

In the sense that if you look at Ethereum from Polygon's point of view, Ethereum is technically just a wormhole-type bridge for all the rollups on it. It's a very secure wormhole-type bridge, because it's secured by the entire Ethereum validator set, but it's still a committee-based bridge. So there's definitely a potential for rollups themselves to have committee-based bridges as well as trust-minimized bridges, because ultimately we will have an ecosystem with multiple chains that use a mixture of committee-based bridges and trust-minimized bridges.

As I said before, the main interesting thing about rollups is the ease of deploying your own chain, which can be secure by itself without you having to bootstrap your own validator set.

Mustafa, to keep you on the mic here: there was a significant amount of inbound on really defining modular. It's a nascent space; there's tons of talk about it, but I know you have some strong opinions here. Can you build on your modular thesis? How do you define this budding world?

Well, I guess different people define it differently. I originally used the term modular blockchain in a blog post in 2019, when I introduced LazyLedger, to mean the separation of consensus and execution in a blockchain. Different projects have used it in different ways, and that's why I think there's some confusion there.

For example, Ava Labs has used it to mean modularizing the software stack itself, in the sense that the software stack itself might be modular. Tendermint itself is actually modular from a software perspective, because it separates the state machine and the consensus: you can plug your own state machine into Tendermint using ABCI.

And that's basically how Cosmos SDK and Tendermint interact with each other. But in the context here, modular blockchain generally means separating consensus and execution, and that's effectively how rollups work because they do the execution off-chain but post data on-chain.

I think we have a question here. We'll add this guy as a speaker. Welcome. Care to ask your question?

So, I've been following along and I'm really impressed with everything. I just had a question. Mustafa said Polkadot can basically scale to a hundred chains, but he didn't really speak on the parathreads that they have, where you can pay block by block. So I was wondering if you could speak on that and explain the difference between Celestia and Polkadot, in the sense that even though you have a hundred parachains, with parathreads you can scale past a hundred chains.

Yeah, so I think that is interesting, but it's still fundamentally different, because you're paying for execution. Even if you're paying per block in Polkadot, you're paying the relay chain to verify the execution of your chain, or at minimum take an interest in the validity of your chain.

In the Celestia model, you're only paying the Celestia main chain for data availability, not execution. So you're not bottlenecked by execution, because the main chain does not have to execute your stuff; your chain uses fraud or zk-proofs to prove the execution to other people.

Let me try to explain it in Mustafa's framework. Polkadot is a system in which there is a built-in bridge, and the bridge depends on the state transition function being honestly executed by all parathreads and parachains.

So, because of the way the built-in bridge works in Polkadot, the Polkadot relay chain has to take an interest in the validity of the state transition function for any chain whose state transitions it is executing, because it has this other function of being a bridge, and that bridge function requires validity.

So you can sort of flip Polkadot over and think of it as a committee-based bridge, in which the committee basically says: okay, for this state transition that we want to bridge, you are going to spin up a validator set and check some number of blocks (more than one in the parachain model, exactly one in the parathread model) in order for the bridge to function. I think Polkadot almost makes more sense if you think of it primarily as a bridge with this property, a very interesting committee-based bridge.

Thanks Zaki. We have another person here who'd like to ask a question, Chad, go for it.

Yeah, my question is: do you guys think it's possible to have shared security amongst non-instant-finality chains? Or do you think shared security can only occur amongst instant-finality chains?

I don't see any inherent reason why you can't have shared security between non-fast-finality chains.

Yeah. I mean, you get into this weird coupling that you start to see in rollup-centric Ethereum, where you're like: okay, I publish my zero-knowledge proof to a chain and then it's going to take 15 minutes to finalize, where the zero-knowledge proof chain is sort of waiting for its own state transition function to be finalized.

And you get these weird couplings between the bridges in these non-instant-finality environments. I think the design space of everything you can do in low-latency bridging environments on non-instant-finality chains is not fully explored.

I have some ideas that I haven't really written about yet. But there is that. The other question is whether or not the rollup itself could be non-instant finality, and in the Celestia model I think it's very easy to imagine a rollup that is not instant finality, where there are multiple parties that have the authority to extend the chain and there is some fork choice rule. I know that Mustafa and I have talked about this in previous private conversations.

Thanks for your question, Chad. We did get a specific question around Celestia: how does Celestia empower light clients? Mustafa, please.

So Celestia is built from the ground up with light clients in mind; if it wasn't designed with light clients in mind, then we would have released mainnet maybe a year ago. It might seem insignificant, but when you actually think about it, light clients are core to blockchain scalability, because scalability is effectively the throughput of your chain divided by the cost for end users to validate the chain.

And that's the fundamental reason why Ethereum doesn't just want to increase the gas limit, and the fundamental reason why Solana's strategy is somewhat controversial: there are very high resource requirements to run a node there.

But no one can actually validate the chain using a laptop, for example, so from a user's perspective it's no different from a web2 database, because you can't actually verify the chain. And the most important aspect of web3, or one of the most important, is that you don't have to rely on a trusted third party or middleman for the state validity of your application.

So it's very easy to scale a chain if you don't care about end-user validation: you can just fork Bitcoin or Ethereum and increase the gas limit. Job done. But rollups are a more long-term approach, because rollups try to scale chains using fraud and zk-proofs, which doesn't increase the execution requirements of the chain.
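As a rough, hypothetical illustration of the scalability definition above (these numbers are made up for the sketch, not from the episode), here is why simply raising throughput doesn't count as scaling if end-user validation cost rises in step:

```python
def scalability(throughput: float, validation_cost: float) -> float:
    """Scalability as described above: chain throughput divided by the
    cost for an end user to validate the chain (units are illustrative)."""
    return throughput / validation_cost

# Bigger blocks: 10x the throughput, but fully validating the chain also
# costs 10x as much, so the ratio (scalability) does not improve.
base = scalability(100, 1.0)             # 100.0
bigger_blocks = scalability(1000, 10.0)  # 100.0

# Fraud/zk-proofs and sampling aim to grow throughput while per-user
# validation cost grows far more slowly (again, hypothetical numbers).
light_client_scaling = scalability(1000, 1.5)  # ~666.7

print(base, bigger_blocks, light_client_scaling)
```

The point of the sketch is just the ratio: "fork it and raise the gas limit" moves both numbers together, while the rollup approach tries to move only the numerator.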

So with Celestia, there are kind of two components to the light clients. We've implemented data availability sampling light clients, which means that any user can verify the data availability of the entire Celestia chain without having to download the entire chain. Instead, they only have to download a very small piece of every block, and with that they can actually get assurances that are almost equivalent to running a validator node or a full node.
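To make the sampling idea concrete, here is a simplified, hypothetical model (not Celestia's actual code or parameters): if an attacker must withhold some fraction of the erasure-coded shares to make a block unrecoverable, the chance that a light client's random samples all miss the withheld shares shrinks exponentially with the number of samples.

```python
def miss_probability(withheld_fraction: float, num_samples: int) -> float:
    """Probability that every one of num_samples uniform random samples
    (with replacement, as a simplification) lands on an available share,
    i.e. the client fails to notice the withheld data."""
    return (1.0 - withheld_fraction) ** num_samples

# Illustrative threshold: suppose the attacker must withhold at least 25%
# of the erasure-coded shares to prevent the block from being rebuilt.
for samples in (5, 10, 20, 30):
    p = miss_probability(0.25, samples)
    print(f"{samples:2d} samples -> chance of being fooled: {p:.5f}")
```

With 30 samples the client is fooled well under 0.1% of the time in this toy model, which is why each light client only needs a tiny amount of data per block. In the real design, the actual withholding threshold comes from the 2D Reed-Solomon encoding, and samples are drawn without replacement, but the exponential intuition is the same.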

We have someone ready to ask a question. Go for it.

Hey, thanks for bringing me on. I just wanted to ask Mustafa a little bit about the difference between Optimint and Cevmos, and what the final product would look like for someone trying to start their DAO or chain without having to bootstrap a validator set.

Yeah. So Cevmos is one of the first products being built on top of Celestia, and Cevmos in a nutshell is basically a settlement chain for EVM-based rollups. What does that mean? It basically means that instead of deploying your EVM chain on top of Ethereum, you could deploy it on this other settlement layer called Cevmos.

And the reason you would do that is because we're designing Cevmos from the ground up in such a way that resource pricing on the chain is structured so that rollups are first-class citizens, in the sense that we want to discourage people from deploying smart contracts or applications directly on the settlement layer and push them to rollups, which do off-chain execution.

I think that's really important because Ethereum has this kind of chicken-and-egg problem, where in the grand scheme of things using a rollup on Ethereum does not save you that much in fees. Maybe it saves you 75% of the fee if you use optimistic rollups, because calldata is still so expensive. But for most people that's not a really compelling reason to use a rollup, unless they truly care about shared security, when they can just get 1-cent transactions on Polygon or the Avalanche C-Chain.

So, you know, a significant chunk of Ethereum users are whales, people that have a lot of ETH and don't mind paying $50 transaction fees. So I think there's some value in creating a chain that's optimized solely for rollups, not just arbitrarily but also in terms of resource pricing.

One thing that we're looking at is making the cost to write and read state too expensive for people to deploy smart contracts directly on the settlement layer, so that they will be pushed to rollups; to strongly financially incentivize people to use rollups. And this is something that the Geth team is trying to avoid; they're trying to do the opposite of this.

There was an EIP, I forget what it was called, where they were proposing that the cost of calldata be reduced, and the Geth team's main concern was that it would favor rollups too much and make it too expensive to not use rollups.

So we'll be concluding shortly here; if anyone has any final questions, please raise your hand. Zaki, we did get a question for you, somewhat more on the personal side of things: how did you realize that data availability is a fundamental part of blockchain scalability?

So I think one of the people I remember talking most compellingly about data availability a long time ago is Joseph Poon, of the original Lightning paper and Handshake, talking about how the data availability incentives work in Bitcoin and why data availability is sort of the fundamental problem.

I think the other thing is just when you try to work through committee rotations in a sharded blockchain, and you ask: how will this committee rotation fail? You typically run into this problem that sharded blockchains are all well and good until a committee rotation fails.

And then, basically, what happens when one of the shards is not live? How do you recover? That is sort of the root problem of trying to figure out how a sharded blockchain can be secure. And all of those things kept landing me back at data availability problems, and looking around the world for solutions is what caused me to pay attention to what Mustafa was working on.

Actually, there's another problem related to data availability that I have a lot of experience with, which is: why is there a 21-day unbonding period in the Cosmos SDK? The default Cosmos unbonding period comes from this notion of wanting to make sure that there is enough time for data on consensus attacks against IBC light clients to become available. And if we had some sort of magical data availability solution, of which I've been interested in several different flavors, you could substantially reduce that unbonding period.

Thank you, Zaki. We are getting this one on Twitter. Mustafa, when you mention sovereign rollups or sovereign communities, can you please expand on that and tell us more about your vision as it relates to that term?

Yeah, so the way I see it, and this is kind of echoing something that Ethan Buchman said: the invention of personal computing allowed individuals to be sovereign, but the invention of blockchains allowed communities to be sovereign. In the sense that, for the first time, you can actually have a shared community space where a group of people can get together and organize independently of the status quo; you can have a community with self-agreed rules that are enforced without requiring the court system or the police to enforce those rules.

And, you know, this only works in a limited kind of setting, specifically resource allocation in a financial setting. But I think that's very powerful, in particular because it's very internet native, in the sense that anyone with an internet connection can join a movement or community and decide how resources within that community are allocated.

But at the moment, that's mostly done using DAOs and smart contracts. I think it would be very powerful if communities could actually create their own chains that do not necessarily derive their authority from some higher chain. If you deploy as a rollup on Ethereum, your chain is bound by the social consensus of the Ethereum community. But maybe you want to diverge, and you don't necessarily agree with the politics of the parent chain (Ethereum).

Then you don't really have many options for having, effectively, your own state machine with your own rules, unless you create your own chain with a weak validator set, which isn't great. But Celestia actually allows you to create what are called sovereign rollups, where you can create a rollup that does not settle to any other chain.

Ethereum rollups, for example, are effectively baby chains; they derive their authority from Ethereum, similar to how a GitHub fork is connected to the master git repo. But what if you disconnect that link from the network?

With a sovereign rollup, you can have a rollup that does not necessarily post to or have an enshrined bridge to some other master chain; you can have a completely independent chain. And the community can hard fork that rollup if it decides to, without having to ask for permission from the parent chain.

Okay, so we're at time here. Zaki, we'd like to thank you for taking the 90 minutes to be with us. Any parting words before we go?

I mean, I think this is where rollups, data availability, the modular blockchain thesis... it's been a very exciting time, and I think there's this moment where it all starts to come together.