Cloud native chronicles: Lessons learned from building Cerbos in the open

Published by Alex Olivier on December 04, 2023

Our very own Alex Olivier, Product Lead at Cerbos, has put together a video, which is a must-watch for anyone curious about what it truly means to embrace cloud-native principles in real-world scenarios.

Alex delves into the essence of being cloud-native, drawing from his extensive experience over the past several years. He shares valuable insights gained while developing Cerbos (Policy Decision Point), our open source authorization solution. The journey to build a successful cloud-native project is filled with challenges and learning opportunities, and Alex is here to share them all with you.

Whether you're a developer, a product manager, or just a tech enthusiast, this video is tailored to provide a deep understanding of the cloud-native landscape. It’s not just about the technicalities; it’s about the journey, the challenges, and the triumphs of embracing cloud-native methodologies.


Check out the transcript here:

Good morning, afternoon, evening, delete as applicable wherever you are on the planet. I am Alex Olivier. I am joining today for this CNCF webinar, and we're going to be talking about cloud native and the lessons learned that I've gone through myself, and our team, building a service in the open source world. Quickly, by way of introduction:

I am an engineer turned product guy. I started my career at Microsoft, but I've spent the last 10 years working in various SaaS businesses across martech, e-commerce, supply chain, and connected fitness, always very much focused on data and infrastructure, working with the teams that are running our big Kubernetes deployments and making use of all the great ecosystem projects in the CNCF. And a couple of years ago, we started our own project called Cerbos, which is tackling authorization in the open source space.

I'm not going to dwell on Cerbos, but at a high level, so you have an idea of what it is: it's a way of implementing roles and permissions in your application using policy rather than code. It's a scalable, stateless, decoupled approach to authorization, which extracts all the complicated business logic out of your application code base into a standalone service that you run inside of your infrastructure, and you define all your business logic as policy, similar to the other kinds of manifests that you're used to using, for example, with your cluster.
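To make that concrete, here's a minimal sketch of what such a policy file looks like, following the Cerbos resource policy format (the resource, roles, and attribute names are invented for illustration):

```yaml
# Minimal sketch of a Cerbos resource policy. The resource, roles and
# attributes are illustrative; the overall shape follows the Cerbos
# policy schema.
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "expense_report"
  rules:
    - actions: ["view", "create"]
      effect: EFFECT_ALLOW
      roles: ["employee"]
    - actions: ["approve"]
      effect: EFFECT_ALLOW
      roles: ["manager"]
      condition:
        match:
          # a manager cannot approve their own report
          expr: request.resource.attr.ownerId != request.principal.id
```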

Really, the target for Cerbos is end user applications, though it can be used from pretty much anywhere in the stack. That's what I'll say about Cerbos itself; I'm going to refer to it as a reference for some of the lessons we've learned building this project over the last two and a half years as an open source project, as well as being a member of the CNCF.

You can find out more about Cerbos itself at cerbos.dev, find us on GitHub, or follow us on X (Twitter) if you wish to find out more. So, the main crux of this talk is going to be what I think, what we've experienced, and what we've heard around what makes a project cloud native.

And here it is in quotes, the "cloud native" phrase: what does that mean in reality? Maybe it should be a lowercase c, lowercase n rather than uppercase. These are all things that we pondered a bit, along with some lessons learned looking at it. So I'll start off by saying: being a cloud native project doesn't necessarily mean joining the CNCF.

It's obviously a great organization; I wouldn't be here speaking to you today if it weren't for it. It does amazing work for the community and the ecosystem, but it's a very busy one. And if you want to launch a project and say it's cloud native, certainly one way of moving in the right direction is joining the CNCF as a member, which we did at Cerbos.

But just saying you're a CNCF project doesn't necessarily mean your actual project implementation, when it comes to reality, is cloud native. For Cerbos, we did join; we are a CNCF member in the security space, and you can see our little logo amongst lots of worthy competitors in there as well.

All trying to solve, ultimately, developer pains and developer problems. So if it's not just becoming a CNCF member and saying you're done, what really does it mean to be a cloud native project? The next thing you might say is: OK, fine, we open sourced the project, we published the container. Great, now anyone can go and grab it and run it in their container runtime of choice. Build it, publish the container, job done, right? We're cloud native, you can run a container anywhere, sorted.

I don't quite believe it's that either. There's a lot more to it than just publishing your app and saying, great, done. Open source, obviously, is almost a hard requirement these days. Open source is the future; it is how much business and application software really gets started. This is what we've done with Cerbos.

We released it two and a half years ago, and it's out there: it's open source, it's Apache 2 licensed. So if it's not joining the CNCF, if it's not just publishing your container, and if it's not just open sourcing your repo, what practical steps or things do you need to do to get to a place where you fit into that ecosystem? And what does that cloud native label really mean in anger, in production, and in reality, when you actually have users out there?

The way I think about it is that to be cloud native, you take your project and it needs to fit into the wider ecosystem. So with Cerbos: it is an authorization service, it's in the security space, but it plays nice. It's connected in, and you can leverage the rest of the cloud native ecosystem and those tools and projects to essentially come up with a complete solution. Because you're going to need a CI solution.

You're going to need a logging solution. You're going to need a metrics collection system. You're going to need a tracing system. And making sure that your project fits into, and integrates with, the wider ecosystem is really what I believe differentiates a good project from a great project when it comes to being inside of the cloud native ecosystem.

So, I mentioned open source before. It's not just a matter of publishing your code onto GitHub and saying, right, job done; there's more to it than that. First, make sure you have a reasonable license. Apache 2, AGPL, there are many to pick from. But make sure that the license for your repository and your code base is one that is not going to deter people.

You know, we've seen some of the recent moves by some big projects out there to maybe change their licenses later on, which has certain consequences downstream and even impacts how people and businesses can actually use your project. So making sure you have a reasonable license in place is the first one.

Code quality. And not necessarily just things like test coverage, etc., but making sure the code is grokkable, to turn a phrase. So: making sure it's clean, understandable, well commented, well structured, and at the right level of abstraction, to allow anyone to jump in and understand how a particular project works and functions.

Once people can understand the code, the next thing is: how do you get contributors? With Cerbos, we've actually created a load of issues with the good first issue tag. If people are interested in contributing to or supporting the project, even if it's something as minor as fixing a typo in the documentation, that really should be encouraged.

So make sure you're highlighting those good first issues, and have a clear contributor policy. We've actually built a lot of automation into the CI pipeline to catch smaller things earlier on and make it a much easier experience to contribute to the project, so you can be confident that your change is ready to go and, hopefully, get it reviewed and accepted by the community.

And then if you're in the amazing position, which thankfully we are with Cerbos, where you do get contributions from your community: make sure you shout out to them. We publish release notes every time we cut a release, and in there every single user that's contributed to our codebase and to the project is listed, and their changes are highlighted, whether a change was built by someone in-house

or by someone from the wider community. Everyone is on a level playing field, and it really encourages that community environment. So from a code perspective, from a repo, from an organization, from an open source point of view, getting these things in line is a good foundation to then build up into the wider cloud native ecosystem.

Explaining how it works. Now, this is one that's close to my heart. I've come from a background of having to wrap my head around various repos and projects and systems, and spending hours going through a code base trying to understand how the hell a particular component would plug into my ecosystem.

So: being able to contextualize how your particular project works with other common components in an architecture stack. Where does it sit in, ultimately, the request chain when you have some end user interacting with the application? Where does your component sit?

For example, with metrics, this is a component that sits alongside your services, scrapes, say, Prometheus metrics from each of the microservices, puts them into a database, and then you have dashboarding and those kinds of tools on top of it. So there's a very clear place where that component fits in, and the same goes for whatever your project does. If it's a database, show at what level it should plug in; if it's a gateway, at what level does it plug in?

With Cerbos, we've actually shown the full end-to-end request flow, really looking at end user use cases: they perform some action, and at what point does authorization, does the Cerbos project, hook in and get interacted with by your system, by your application? So: really contextualizing where things go. Now people have a very quick idea of: right,

I understand what this project does, I understand where it fits into the stack; how the heck do I actually use it? So, documentation. Obviously I don't need to labor that point; everyone knows the importance of docs. This will make or break a project. We did a lot of work in the early days of Cerbos to actually integrate the documentation, where possible, into the codebase.

There are actually whole chunks of the Cerbos documentation at docs.cerbos.dev which are generated from the Go code. Cerbos is written in Go, and there are comments, structs, and such inside of the code base which are extracted out during the documentation build process and then pushed into certain pages on the docs site.

And that way we can never have drift between the code base and the docs, because the docs are generated from the code. If we go and change a configuration variable or add a new field, we update that in the code base and, through the doc building pipeline, that field will show up and be available for users to set values as they wish.

That's a really key one. Next: having a very solid set of release notes. Whenever we publish a new version of Cerbos, not only do we highlight which commits have been included in that release, but we also include references, quick starts, and pointers on how to use a particular change, migration paths, these kinds of things.

So in the case where there may be a migration to be done, make sure it's a sensible migration and that you have a good roll-forward strategy and a rollback strategy if something doesn't quite work. How can a user upgrade a typical configuration to the new configuration? And when you do push a change that does require a migration, what is the deprecation window?

Being able to run the old configuration for a little while until you're happy things are stable, releasing a few versions, and then what is the deprecation story after that? With Cerbos, we're still in the pre-version-1 stage, but even within the minor versions we keep capabilities or changes around for a few versions after their initial release before deprecating the older ones.

It just gives users the confidence that this solution is something that is set up to grow, set up to scale, is sensible, and can be relied upon when it comes to upgrades and updates. Next: being language agnostic. Anyone that's using a lot of the CNCF projects will be aware these are generally services that you're running inside of your stack, and

when you actually look at the end user applications that are making use of different projects, these are going to be in a myriad of different languages. There's a bit of a bias towards Go within the CNCF projects themselves, but as you look at the majority of end user applications, you see a lot of Node and TypeScript out there, you see PHP and .NET, and Python obviously is the big one now with the machine learning world, etc.

net You know, Python obviously is the big one now with machine learning world, etc. So really look at how can you make sure that your project is agnostic to any particular architecture or any particular framework. So, with Cerbos ultimately we ship a container. That container exposes a gRPC or REST interface and then the idea is any part of the application stack can then call into that service and Interact with it and get kind of authorization decisions Without it being coupled to any particular application.

So we have a very simple, basic API. Primarily it's exposed as a gRPC interface, but recognizing that not all application frameworks are, let's say, as friendly to gRPC, or that it takes a bit more overhead to understand protobufs, etc., we've also put a REST wrapper around it: a straightforward HTTP REST-style API which ultimately sits on top of the gRPC service. And then, to make the onboarding experience as smooth as possible, we have SDKs.

We've done a lot of work to code-gen SDKs from the protobuf definitions for the gRPC interfaces. Then, as a user of Cerbos inside of your application architecture, at the point where you want to check permissions, you don't have to worry about constructing gRPC channels, securing the connection, or

formatting the calls, etc. You're getting a type-safe SDK in your particular language, and you just point it to whichever internal service or services are running, typically inside of your cluster. So: be a service, and make sure you provide sensible SDKs on top, to allow users to plug in as simply as possible.

Being infrastructure agnostic is one that takes a bit of thought in terms of how to design and set up. Obviously we're now in the world of containers, but what we've seen from the usage of the Cerbos project is that not everyone is just using Docker containers. So for the Cerbos project, we distribute the container,

and we distribute binaries: through our pipeline, we generate binaries for the various OSs as well. We have a number of users that essentially just install the Debian package onto a node when they're running Cerbos; they're not in Kubernetes or anything like that. So how can we fit into that environment? And there are environments where people want to check permissions but you can't even run a runtime, so we actually distribute a WebAssembly module as an option also. So really understand where your project is going to be used, and not used. Use your community, use whatever feedback you can get, speak to the contributors, etc., and really understand where this application, where the service, is going to be run. Containers are a pretty good bet these days, but I was even surprised that we had users asking for binaries and wasm modules and such.

So we did the work and put those out there, and there is usage of these other methods of running, which maybe you wouldn't consider if you're just thinking Kubernetes. So really understand the infrastructure that your end users are using. When it comes to the container itself, be sensible about the base image. Using something like scratch or distroless as the base image obviously makes it smaller, and it also removes a whole load of attack vectors when it comes to locking down your containers on the security side of things. But one thing to be wary of: there are users who, when they're trying to test and evaluate your project, sometimes want to be able to shell into the container and run some test commands against it.

So really think through: what is the debugging story? If a user is running your service in a container, but there's nothing else in that container besides your service, how do they get in? Maybe they can execute it with a different flag, those sorts of things. It's something to be wary of. Which brings us to health check endpoints.

Kubernetes obviously is the big one for this, but maybe users are using things like Docker Compose, or they're running directly on bare metal with a load balancer doing health checks, etc. Having an HTTP endpoint for a health check is the fairly common practice, but we have users that are running Cerbos in serverless environments where the infrastructure will just exec a command inside of the container.

So we've created a Cerbos healthcheck command: you literally exec the Cerbos CLI tool with a healthcheck argument, and it evaluates and returns a status response. So it's not always going to be an HTTP check. What is the story for your project when it comes to health checking?
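As a sketch, here's how both styles of check could be wired into a Kubernetes pod spec. The HTTP path, port, and command arguments are illustrative, so check the project's docs for the real ones:

```yaml
# Illustrative Kubernetes probes for a service that supports both an
# HTTP health endpoint and a CLI health check command.
livenessProbe:
  httpGet:
    path: /_cerbos/health   # assumed HTTP health endpoint
    port: 3592
  initialDelaySeconds: 5
  periodSeconds: 10
# Alternative for environments that exec a command in the container:
# livenessProbe:
#   exec:
#     command: ["/cerbos", "healthcheck"]
```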

mTLS support. It's all very well saying that we ship this nice microservice you run in your stack, etc., but these are going to be running in environments where there are restrictions and rules around locking down that environment and making sure everything is secure, and inter-service communication is one of those aspects.

So think about the API you expose out: how can you do things like mutual TLS? This has been a hard enterprise requirement for some of the Cerbos users to date. So we've added support where you can mount your own certs, and then the server inside of Cerbos uses those mounted certificates to do TLS. And then really think through what the certificate rotation story is, those sorts of things; you don't want users to have to destroy your pods if they roll the certs.
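As a sketch, a TLS section in the server configuration might look like this, with the certificates mounted into the pod from a secret; the field names loosely follow Cerbos's server config, but treat them as assumptions:

```yaml
# Illustrative server TLS configuration using mounted certificates.
server:
  httpListenAddr: ":3592"
  grpcListenAddr: ":3593"
  tls:
    cert: /certs/tls.crt    # mounted, e.g. from a Kubernetes Secret
    key: /certs/tls.key
    caCert: /certs/ca.crt   # CA bundle for verifying client certs (mTLS)
```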

Having these capabilities in place is possibly a bit more of an advanced requirement, but if it does come to it, this could be make or break for someone actually deploying your particular project. When it comes to deployments, I've mentioned this already: Kubernetes obviously is a big one, but how can you make that easier?

For Cerbos, where you want to have the service running inside of your cluster, we've done things like publish Helm charts, and there are variants of those for running it as a service, a DaemonSet, a sidecar, etc. So how do you inject the container into the right point in the stack?
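For example, the sidecar variant might look roughly like this in a Deployment's pod spec, keeping authorization checks on localhost (the image tag, ports, and config names are illustrative):

```yaml
# Illustrative sidecar deployment: the PDP runs next to the app container.
spec:
  containers:
    - name: app
      image: my-app:latest              # hypothetical application image
    - name: cerbos
      image: ghcr.io/cerbos/cerbos:latest
      args: ["server", "--config=/config/config.yaml"]
      ports:
        - containerPort: 3592           # HTTP
        - containerPort: 3593           # gRPC
      volumeMounts:
        - name: cerbos-config
          mountPath: /config
  volumes:
    - name: cerbos-config
      configMap:
        name: cerbos-config             # hypothetical ConfigMap
```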

Also look at other environments: things like CloudFormation templates, ECS definitions, cloud container definitions. If your service is capable of running ephemerally and statelessly, how do you run it inside of Lambda, or these other serverless-function-type environments? What about edge functions?

What about environments where you just want to run a WebAssembly module and not have to worry about actually running a full-fat service? How do you support those? Really think, again, about where your users are, what environments they're running in, and how you can best support as many runtime environments as make sense for your particular project. One thing I will say is around hosting: really have a think through whether your project is something that

you would want to be a managed service or a hosted service. In the case of authorization, we have been staying away from a managed decision point, because architecturally you don't want to be going out over the internet to some managed, hosted version of a service to check authorization permissions.

Because this is going to be in the blocking path of every request, the moment you're going out of your network, out of your cluster, off your node even, over the internet to do a permission check, that's going to add tens of milliseconds of latency to every single API call coming into your system, which isn't a good design.

For some projects it makes complete sense, and a hosted, managed version is certainly a way to monetize an open source project. But really think through: is that the way that you, as someone who might be using your project, would actually want to architect and design an application?

Next up: storage. If your project needs to persist data, how do you make that pluggable? In the case of Cerbos, the way it works is you're loading authorization policies. These are YAML definition files that define who can do what under which circumstances, under which conditions. We wanted to make sure we had an agnostic, pluggable backend for different storage mechanisms, because we knew our users are running all sorts of different environments and have different controls and constraints around how they can bring configuration and stored data into their runtimes.

Storing on disk, either by mounting a volume or a ConfigMap, is the obvious one, but also allow users to pass in credentials to a cloud storage blob (S3, GCS, Azure Blob Storage, these kinds of things) and use that as a storage backend. And Git repos: GitOps is very much here to stay, I think it's safe to say at this point.

How can you actually use a Git repo as a storage backend, and what is the whole GitOps story around it? Allowing users to provide credentials to a Git repo hosted somewhere, and then having your runtime pull the configuration down from there on some regular cadence, enables you to have a long-running service whose configuration (in Cerbos's case, authorization policies) changes dynamically on a regular basis, with users able to set up the polling, reload, and caching intervals, etc.
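A Git-backed storage configuration might look something like this; the driver and field names mirror Cerbos's storage config, but take the exact keys and the repo URL as assumptions:

```yaml
# Illustrative Git-backed policy storage with periodic polling.
storage:
  driver: git
  git:
    protocol: https
    url: https://github.com/example-org/policies.git  # hypothetical repo
    branch: main
    subDir: policies
    updatePollInterval: 60s     # how often to pull down changes
    https:
      username: ${GIT_USERNAME} # injected from the environment
      password: ${GIT_TOKEN}
```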

Then, when you think about getting started, or about more dynamic use cases: does there need to be a database? OK, great. Which database? Well, SQLite, PostgreSQL, MySQL, MS SQL Server, etc.; there's a plethora out there. We're now at a good point where, especially in Go, there are some great abstractions around this.

So you can actually make this a pluggable backend. One environment might be using PostgreSQL, another one might be using MySQL, and SQLite is gaining more and more popularity these days. So make sure your application's storage configuration is pluggable across different database backends,

and then it's just a configuration flag to set what that backend is. And then really think: what's the least effort involved for a user to try out your project? You don't want them to have to worry about spinning up storage or a backend, provisioning things, et cetera.

What about providing a way of doing in-memory storage? With Cerbos, if you start it up without providing any configuration flags, you literally just start the server: it will read policies in from disk, or you can actually create policies on the fly, and they're just stored in memory. They're ephemeral in that case, but it's a really quick way of getting up and running without having to set up a load of infrastructure or any other auxiliary services.
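So the low-friction path can be as simple as a driver switch; a sketch, with the field names loosely based on Cerbos's storage config:

```yaml
# Illustrative pluggable storage: swap drivers via configuration only.
storage:
  driver: disk            # or "sqlite3", "postgres", "mysql", ...
  disk:
    directory: /policies  # read policy files straight from disk
    watchForChanges: true # hot-reload when files change
  # sqlite3:
  #   dsn: ":memory:"     # ephemeral in-memory DB, nothing to provision
```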

So again: being agnostic to your particular storage layer. Great. The next layer down: configuration. Mounting configuration is a bit harder than you'd think. With Cerbos, we went down the route of making this extremely pluggable. You can provide a configuration file directly.

Again, it's just another YAML file; when you start up the server command, you point it to where that file is. Job done. But then being able to override particular values through CLI flags, or using environment variables, or allowing environment variables to be interpolated inside the configuration file,

is another important one. When you're deploying your project in different environments, how do you provide the environment-specific context? You may want to have a base configuration file but override it using particular values from the environment, for example. And then secret handling: there are different ways of approaching this.

Kubernetes obviously has a number of different ways of doing it, but your secrets might be coming from a secret manager or a secret-store-type tool. So how do you inject those into the container? Do you use environment variables? Do you support interpolating those into the config?

Maybe you just want to override them on the CLI. Really have a think through what the different configuration flags are, what's static, what you might want to change dynamically based on the environment, and what needs to be injected at runtime from a secret store, and think about how that gets in. With Cerbos, we found out quite early on that getting a configuration file into the relevant environment where Cerbos is running

actually wasn't as simple as we thought for a lot of people. You may not have the ability to push your binary and a config file to an environment, so being able to just override things via CLI or environment flags is a big one. So really think through what your configuration requirements are, how you make them environment specific, and how you provide overrides and sensible defaults where needed.
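Putting those together, a base config file with environment interpolation might look like this sketch (the variable-expansion syntax and field names are assumptions):

```yaml
# Illustrative base configuration: static structure in the file,
# environment-specific values and secrets injected via env vars.
server:
  httpListenAddr: ":3592"
storage:
  driver: postgres
  postgres:
    # credentials and host come from the environment at startup
    url: "postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:5432/app"
```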

Strong API versioning. In an ideal world, you have a super successful project, which I'm sure you're all working on. But how do you make sure you don't break anything down the line? So, you know, people will be building whole application architectures or further products on top of your projects.

So how do you make sure you don't break them with future changes? By using something with a strong API contract. With Cerbos, like I mentioned, we use gRPC with the REST layer on top, so we have protobufs that define absolutely everything inside of Cerbos: not just the external API, but all the internal definition types as well. Equally, you could go and use something like GraphQL, which gives you that strong schema, and you can version that, etc. And if you go down to the next level: how do you strongly version? Using things like a version number in the URL, in a header, or in the path, to allow users to pin themselves to a particular version of an API, is kind of crucial.

And, going back to the documentation point: when you do make a change, what's the migration story that goes with it? Then, ensure that you have strong validation both in the service and at the SDK level. It's all very well having strong validation at your API layer, but if your SDK doesn't enforce it, it's actually going to be quite a broken developer experience: you've put some values into the SDK, all good, but then the SDK calls your project running in the backend somewhere and the request gets rejected. Not good.

So, going back to actually generating SDKs from the service definitions and the protobufs (or whatever you're using for your API layer) ensures that it's going to be a smooth experience, and make sure you have that strong validation at both layers of the stack. And as I mentioned: use the strong definitions not just for the external interfaces, but for the internal ones as well.

These are predominantly going to be open source projects. You want people to extend them; you want people to contribute, evolve, and build amazing new products on top of your projects. Having strong definitions of even the internal data types, which can then be reused and extended, is a key one

to make sure that everything keeps working as you'd expect. So, when it comes to logging: you've got your app deployed, you've got the container running, you've got the storage sorted, you've got your configuration parsing. Now the actual service is running; what's the logging story?

Configuring the log levels is the obvious one everyone's used to. But also: splitting out the logs between different types. In the case of Cerbos, there are two types of logs. We have the request logs, which are the raw application logs (HTTP in and out, gRPC in and out), and we have the audit logs, which are the actual decision outputs, which in the case of authorization is the bit you want for compliance and such. So we actually have support for multiple output formats for each of those, so you can route each of the log types to different locations, and then you may have downstream log collectors doing different things. We use pluggable sinks for this. Standard error and standard out are the obvious ones,

which are great if you're in an environment where you've got a log collector running (Fluentd or one of these types of solutions, amazing). But you can also configure it to write out to a specific file, and that's where you can actually start splitting the logs into different files based on what they are, or on level or source or module, etc. But also having different sinks for things like Kafka:

Cerbos now has a community-contributed Kafka sink for its logs, where you can provide the broker and it will write directly to the particular topic you provide. Then really think: right, my application produces logs and writes them out to certain places, but where are they really going? In some cases they might just be dumped in a bucket and that's that, but think about end users who are going to be pushing these logs into either commercial tools (Datadog, Elastic, etc.) or into things like Loki and the other CNCF and open source projects that do logging. So really understand where it goes.
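As a sketch, the split between request logs and audit logs, each with its own sink, might be configured like this; the field names loosely follow Cerbos's audit config, and the Kafka keys are illustrative:

```yaml
# Illustrative audit configuration: decision logs get their own sink,
# separate from the normal application logs on stdout/stderr.
audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: local
  local:
    storagePath: /var/log/cerbos/audit   # file-based sink
  # backend: kafka                       # community-contributed sink
  # kafka:
  #   brokers: ["kafka-0:9092"]
  #   topic: cerbos-audit
```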

So really understanding where it goes And the final point around logging if you're working on your project if someone's using your project locally When you want to kind of understand what's going on, you want a bit more human readable version of your logs, but then when it's in production or an environment that's fully kind of spun up, you probably want a small structure logging using something like Jason lines.

So again, even detecting whether a TTY or terminal is connected, those sorts of things, to enable users to see different versions of the logs based on whether it's a human looking at them or some log aggregation system, and using structured logging where it makes sense.

Metrics. I think it goes without saying that Prometheus and Grafana dashboards are the way of doing things in this ecosystem and the cloud native world. With Cerbos, the project obviously produces Prometheus metrics, but we also do things like publish dashboards. Inside of the Cerbos repo, we have a Grafana

dashboard defined; you can pull that into an instance, point it to your Prometheus instance that has the Cerbos metrics in it, and it lights up straight away, so you don't have to go through the manual process of building your own dashboards. We provide an out-of-the-box one to get you up and running.

And it goes beyond that: not just providing the dashboard, but, if you're using something like Alertmanager, what are the things to look out for? Guidance on what metrics to watch in production, which ones you can ignore, and, in the scenarios where maybe something's not working as expected,

initial pathfinding to which metrics to go and check out first. Just spitting out Prometheus metrics is not really enough. You need to go the next level down: provide the dashboards, provide the Alertmanager rules, provide the insights of "here are the things you really need to watch when you go into a production environment."
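For example, that packaged guidance could include a Prometheus alerting rule like this sketch; the metric name and threshold are invented for illustration:

```yaml
# Illustrative Prometheus alerting rule for an authorization PDP.
groups:
  - name: authz-pdp
    rules:
      - alert: HighCheckLatency
        # hypothetical latency histogram exposed by the service
        expr: histogram_quantile(0.99, sum by (le) (rate(cerbos_grpc_server_handling_seconds_bucket[5m]))) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "p99 authorization check latency above 50ms"
```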

And tracing. So, it's all great: you've got the logs, you've got the metrics; now you want to really understand your application performance. OpenTelemetry support. OpenTelemetry as a project and that whole ecosystem is amazing. So do that little bit of extra work to not only expose spans from your application and your project, but also make sure that all your SDKs pass the relevant tracing headers through as well. It's all very well providing insights and instrumentation into what your particular project does, but as a person building an application you want to understand a much wider context, so make sure the headers flow down into your particular project and then back up, so you can get the full request spans and tracing ability and all the benefits that come with them.
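A tracing section in the service config might look like this sketch; the exporter name, endpoint, and field names are assumptions rather than the project's exact schema:

```yaml
# Illustrative tracing configuration: sample a fraction of requests
# and export spans to a collector.
tracing:
  sampleProbability: 0.1                    # trace 10% of requests
  exporter: otlp
  otlp:
    collectorEndpoint: otel-collector:4317  # hypothetical collector address
```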

And then, telemetry. This is one of the more controversial topics, and definitely "here be dragons", as it says. There's a balance to be struck here. As someone building a project, you really want to have an idea of how users are using your particular project, what's being used, and under what scenarios, in order for you to prioritize new features and new capabilities and understand the different use cases of people using your project.

But on the flip side, particularly when this is infrastructure running in locked-down environments, you want, in most cases, to lock it down and make sure that there's nothing leaking that you didn't want. So with Cerbos, we took an approach where there is telemetry, but every single time you start the Cerbos instance up,

we give you a log message that tells you whether it's enabled or disabled. We tell you how to enable or disable it right there in the initial log. And then it points you to our website, where we list out not only what's being collected, but also how we collect it and what the data structure is. We link off to the protobuf definition of our telemetry object, so you can actually see what the fields are.

We also call out how we're actually doing the data collection. In this case, we use RudderStack as the managed service that acts as the telemetry collection endpoint. And we also call out in the documentation who has access to it: in this case it's Zenauth, the company behind Cerbos.

We, as employees of the company, can see that data, but we're not hiding anything. It's completely visible in the code base what's being sent, what's being collected, the structure of it, where it's being sent, etc. And we also make it extremely clear at startup time how to enable or disable the project telemetry inside of your particular instance of the application.
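Disabling it is a one-liner; in Cerbos's case this can be done in the config or via an environment variable (the exact key names below are as I recall them, so verify against the docs):

```yaml
# Illustrative telemetry opt-out in the configuration file.
telemetry:
  disabled: true
# Or via an environment variable at startup, e.g. CERBOS_NO_TELEMETRY=1
```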

So, I believe telemetry is OK as long as you're clear with your users about what's being collected and also how they can disable it very, very easily. You shouldn't be trying to hide the fact you collect telemetry, because that's just a bad place to be. And just to wrap up, a final point.

This is more of a marketing point than a technical one. You want people to use your project; you want to get as much traction as possible. And believe it or not, it is actually possible to do success stories or case studies with users of an open source project.

You've probably all seen quotes from users of projects in the GitHub repo, etc. We've gone one step further: we've actually spoken to some of our largest users (Envoy, Utility Warehouse, Blockchain.com) who are using Cerbos today, and we've actually managed to do full-on case studies and success stories with them.

So even though it's an open source project, and they're not paying us anything (they've just taken it off GitHub and run it all themselves), there's still an interesting story to tell to help fellow architects, developers, and decision makers around which solutions to use, regardless of whether they're commercial or open source.

There's still a story to be told, and insights that can be provided, around how a business or project adopted some solution, how long it took, the savings involved, any particular insights into best practice, et cetera. Pull all these things out and you'd be surprised that you can actually get case studies even for an open source project. But really make sure that the insights in those case studies are focused on the developer experience, because you're building an open source project in the cloud native world.

We are all developers here; we are trying to help other developers save some time and remove a few headaches by providing solutions to those problems. You can do that in a way that still takes a more classical case study approach, but really focus on the developer side of things. So, just to wrap up: for me at least, for a project to be cloud native, it really is your project mixed in with the rest of the ecosystem.

Playing nice, having the integrations, having the best practices in place, having all the right hooks into the other parts of the cloud native ecosystem available, leveraging other parts of the ecosystem for metrics and monitoring and telemetry, being storage agnostic: all these concepts, ultimately, are what I think makes a cloud native project successful. But really, it's just: play nice with others.

We're all in the same game. We're all trying to build amazing technology, build amazing systems, and deliver great products to our end users. So why not leverage what's out there? You don't want to have to reinvent the wheel every single time if there's another great project doing the thing you need.

Integrate with it; don't necessarily waste time building it all yourself, and ultimately play nice with others. So, thank you very much. If you want to find out more about Cerbos, if you want to go look at our repo, etc., go and head over to cerbos.dev. You can find everything there, and I will speak to you all online.

Book a free Policy Workshop to discuss your requirements and get your first policy written by the Cerbos team