diff --git a/shipit/ship-it-134.md b/shipit/ship-it-134.md index 415f7778..a8a394f9 100644 --- a/shipit/ship-it-134.md +++ b/shipit/ship-it-134.md @@ -70,7 +70,7 @@ And so we do have people that we're paying to do stuff that we don't have time t **Justin Garisson:** And in the pre-show we did determine that being a PM doesn't mean you work at night... It's something else beyond that... But tell us a little about your background. How did you end up as a principal PM at Honeycomb? And what software are you responsible for? -**Phillip Carter:** \[00:07:52.16\] Yeah, so I started my career working at Microsoft outside of college, and I joined the .NET team. And that was pretty fortuitous, because this was around 2015, when we had the very first, very crappy preview of cross-platform .NET that we were building, which at the time was called .NET Core... And this very big goal of like "Hey, we want .NET to be moved from like this Windows-focused development platform", which is fine, but like clearly not the future of software, "to be like inherently cross-platform, inherently native when you deploy on Linux, and be able to be so that you could run on Linux and deploy on Windows, if you wanted to. You'd be a weirdo for doing that, but you could... But like you could build your apps on Windows, deploy on Linux, build your apps on \[unintelligible 00:08:35.22\] map, deploy on Linux, build your apps on Linux, deploy on Linux... Do whatever the heck you want. Work well in containers..." All the things that we associate with modern software development these days - these were all just like big bullet pointed lists of like "We need to add a green checkbox to every single one of these. And we're going to do it in like five years." So we did. +**Phillip Carter:** \[07:52\] Yeah, so I started my career working at Microsoft outside of college, and I joined the .NET team. And that was pretty fortuitous, because this was around 2015, when we had the very first, very crappy preview of cross-platform .NET that we were building, which at the time was called .NET Core... And this very big goal of like "Hey, we want .NET to be moved from like this Windows-focused development platform", which is fine, but like clearly not the future of software, "to be like inherently cross-platform, inherently native when you deploy on Linux, and be able to be so that you could run on Linux and deploy on Windows, if you wanted to. You'd be a weirdo for doing that, but you could... But like you could build your apps on Windows, deploy on Linux, build your apps on \[unintelligible 00:08:35.22\] map, deploy on Linux, build your apps on Linux, deploy on Linux... Do whatever the heck you want. Work well in containers..." All the things that we associate with modern software development these days - these were all just like big bullet pointed lists of like "We need to add a green checkbox to every single one of these. And we're going to do it in like five years." So we did. **Justin Garisson:** Is it not called .NET Core anymore? @@ -98,7 +98,7 @@ So I worked on our languages, specifically on the F\# language and the C\# compi **Justin Garisson:** Intuition is way better than data. If you're like data-driven, you're like "No, no, I know how this should work, because I do it." It's so much more powerful than saying "Here's why, and I will show you why." -**Autumn Nash:** \[00:12:03.05\] I'm not gonna lie, but I'm so excited to write code for fun and for an experiment, and not in production... 
+**Autumn Nash:** \[12:03\] I'm not gonna lie, but I'm so excited to write code for fun and for an experiment, and not in production...

**Phillip Carter:** Nice.

@@ -128,7 +128,7 @@ And since I've switched over to Honeycomb and into the startup world, there's th

**Autumn Nash:** It is. But when people aren't honest and they'll just say all the good things, and they'll just like blow smoke up at you, you won't trust them and their opinion on anything technical. That's how you lose all your credibility.

-**Break**: \[00:15:16.28\]

+**Break**: \[15:16\]

**Justin Garisson:** Alright, Phillip, here's one honest thing that I have wanted to know for a very long time.

@@ -140,7 +140,7 @@

Obviously, lots of similarities. There's elements of systems programming going on here, where there's good code written in both places... But very often, when you would try to host a monoservice somewhere, it would just fall over, and you'd be like "Oh, is this a joke?" It's like "No, it's not, actually. It was just made for something completely different." So we have this kind of complex relationship - this was before the acquisition of Xamarin - when we were building our own thing and targeting the backend with ASP.NET apps, and saying, "Okay, we want to make this -- a Go developer should be able to pick up our stack and be like "Oh, great. This has the same perf that I care about." That was sort of like the target we were going for.

-\[00:20:04.25\] But on the Xamarin client side, we're like "Okay, this is just a fundamentally different runtime, and actually even different set of libraries." And then we worked with them to start unifying some of the library layer. Because there's just all the different utilities in the .NET standard library that - it really doesn't matter where it runs... But they had their own version of this, and we had our own version of this, and we're like "Okay, this is--" Regardless of what degree we work together on, we all agree this should just be a standard that we all consume. And who owns the code? We'll figure that out. It's an open source project all around anyways, and so there's actually a lot of collaboration on that side of things... And then eventually, strategically, I think they were probably aiming for this at some point. We acquired Xamarin, at which point we were able to unify significantly more... And then we actually were able to go on a very big technical project where we unified the runtimes entirely, and were able to actually do that quite successfully, to the point where it's actually the same singular runtime that can run on a client device, or on the backend. And they have excellent performance characteristics, and it's actually - different parts of the runtime will activate depending on the deployment environment, and the garbage collector has 5 or 6 different modes that it can operate under... And it'll have different behaviors depending on the kind of thing that it's packaged into, and stuff. And this is all something that was architected and designed for. It's pretty wild.

+\[20:04\] But on the Xamarin client side, we're like "Okay, this is just a fundamentally different runtime, and actually even different set of libraries." And then we worked with them to start unifying some of the library layer. Because there's just all the different utilities in the .NET standard library that - it really doesn't matter where it runs... 
But they had their own version of this, and we had our own version of this, and we're like "Okay, this is--" Regardless of what degree we work together on, we all agree this should just be a standard that we all consume. And who owns the code? We'll figure that out. It's an open source project all around anyways, and so there's actually a lot of collaboration on that side of things... And then eventually, strategically, I think they were probably aiming for this at some point. We acquired Xamarin, at which point we were able to unify significantly more... And then we actually were able to go on a very big technical project where we unified the runtimes entirely, and were able to actually do that quite successfully, to the point where it's actually the same singular runtime that can run on a client device, or on the backend. And they have excellent performance characteristics, and it's actually - different parts of the runtime will activate depending on the deployment environment, and the garbage collector has 5 or 6 different modes that it can operate under... And it'll have different behaviors depending on the kind of thing that it's packaged into, and stuff. And this is all something that was architected and designed for. It's pretty wild. **Justin Garisson:** Do you know if other languages do that? Is that something that's common? @@ -158,7 +158,7 @@ Obviously, lots of similarities. There's elements of systems programming going o **Justin Garisson:** ...because people are still running it from 2005. Like "I'm still on Java 9", or something like that. But I remember at Amazon they had this whole like shim layer that they built in. And I'm pretty sure there was a blog post about this, where they were like "Well, we're not going to rewrite all our Java, but we are going to compile onto the new virtual environment \[unintelligible 00:23:58.29\] the compiled VM runs." And they saved millions and millions of dollars, because they were like "We've just got better performance." -**Autumn Nash:** \[00:24:07.28\] The performance literally paid for like millions of dollars. +**Autumn Nash:** \[24:07\] The performance literally paid for like millions of dollars. **Justin Garisson:** All the engineering time, plus more. Yeah. It was crazy. Just like "We just put a shim. We didn't rewrite the code. We recompiled it, put the shim..." @@ -184,7 +184,7 @@ There's clearly a lot of interesting stuff going on in developer tools outside o So I chatted with them, kind of did the whole interview loop... I really liked the people, and was kind of sold on some of the initial vision. Because Honeycomb - it came out before Open Telemetry, but it was still very, very early stages when Open Telemetry was formed. And Honeycomb as an observability tool is fundamentally different from the rest of the market. It has this ambitious goal of "We really want to help developers reshape fundamentally the way that they introspect their systems", and this gets from super-high level, like how you do an analysis workflow... Instead of saying "Oh, I'm going to look at my logs, and then I'm going to go look at traces that correspond to that, and see if I can find a match..." And like "Oh, I have this metric that says it's up. Alright, cool. Are there any logs that relate to this time range, or something?" Kind of a broken debugging flow compared to what we do. And that kind of attracted me. 
-\[00:28:08.25\] But then at the same time, they're like "Well, there's also this whole other thing with Open Telemetry", where regardless of our product shape or what we think people should be doing best, there's this open standard that's evolving, that is ambitious enough that people -- it captures the majority of what people care about. And we have some pretty big customers, who are like - them continuing with Honeycomb is pretty contingent on us being quite deeply involved in the project. +\[28:08\] But then at the same time, they're like "Well, there's also this whole other thing with Open Telemetry", where regardless of our product shape or what we think people should be doing best, there's this open standard that's evolving, that is ambitious enough that people -- it captures the majority of what people care about. And we have some pretty big customers, who are like - them continuing with Honeycomb is pretty contingent on us being quite deeply involved in the project. So one of our larger customers, quite literally, in clear contract terms, was like "You need to be significantly involved here, because we're taking a bet on standardizing on OTel, and we're taking a bet on using Honeycomb across all of our teams. And I do not want one of those pillars to falter in some way. So I trust that you care about your business, but do you really care about OTel? Figure it out." That's what the role was scoped to, was "Okay, we're going to take a bet on OTel. Great. We did it. Mission accomplished! But what is that bet? What are we doing, though? Where should we point our time, and our limited number of engineers we have? Should we hire for this? For how many? What are the most important things to invest in? What are the major problems that people have?" All the big stuff. @@ -196,17 +196,17 @@ And being a startup, I kind of had my hands in all kinds of different places, bu **Phillip Carter:** So in fall of 2022, or late summer of 2022, I'd sufficiently been at Honeycomb long enough that I kind of got the whole breadth of what does the customer journey look like, what are people struggling with when they onboard, versus when they're on day 500 of using the product? And what are their struggles when they're trying to onboard other teams in their organization? ...because this one group buys Honeycomb and they have an awesome time, but that doesn't necessarily mean that this other engineering team that they work with is also going to have a great time, and their challenges might be slightly different... And I came to the conclusion that our product has all of these things that people want to do, where the answer lies on some probabilistic distribution. -\[00:32:06.19\] A very concrete example that ultimately turned in one of the features that I helped build was people come in saying "Hey, I want to query something in this way. I care about this information." Well, there's possibly hundreds of queries that could technically work for what you're trying to do, and there's no single one that's guaranteed to be the right one. If you say slow requests - okay, well, there is a way to technically measure that... But then you get into, okay, we have all these different aggregators you care about. An average. You care about a P80, a P90, a P95... Do you not actually care about any kind of aggregation? You just want to see a count of some kind? Do you care about like a max, and you want to see the maximums? And then when you say "Okay, slow requests, but with respect to what? 
Is it with respect to a particular call to an HTTP route, or a call to a database, if you have multiple databases?"

+\[32:06\] A very concrete example that ultimately turned into one of the features that I helped build was people come in saying "Hey, I want to query something in this way. I care about this information." Well, there's possibly hundreds of queries that could technically work for what you're trying to do, and there's no single one that's guaranteed to be the right one. If you say slow requests - okay, well, there is a way to technically measure that... But then you get into, okay, we have all these different aggregators you care about. An average. You care about a P80, a P90, a P95... Do you not actually care about any kind of aggregation? You just want to see a count of some kind? Do you care about like a max, and you want to see the maximums? And then when you say "Okay, slow requests, but with respect to what? Is it with respect to a particular call to an HTTP route, or a call to a database, if you have multiple databases?"

There's so many different ways to potentially answer this in terms of a query, and nothing in our product was like "Hey, here's how you do that." It just all assumed that you knew how to shape what you cared about in the right form and the right kind of query to do that. And this is just something that -- I mean, frankly, it's still a problem with Honeycomb. And the natural language querying thing that we built in early 2023 is just a step in the direction of helping people there. But I kind of wrote this document that was like "Hey, there's all these areas in the product where the solution lies on some probabilistic distribution, and there are parts of that distribution that are likely to be more useful than other parts." And that is squarely the domain of machine learning. So we should investigate, we should explore, we should experiment, we should try stuff, ship to learn, see what happens, \*bleep\* work around and find out... Or sorry, fork around and find out.

**Autumn Nash:** We've got to keep Phillip forever. He has to come back. He is our people.

-**Break**: \[00:33:55.28\]

+**Break**: \[33:55\]

-**Autumn Nash:** So did Honeycomb create OpenTelemetry, or was it like an open source project that they ended up adapting? Or did they start it as an open source product? Was it started as an actual product for Honeycomb that wasn't open source and then it was open-sourced? How did all that come about?

+**Autumn Nash:** So did Honeycomb create OpenTelemetry, or was it like an open source project that they ended up adapting? Or did they start it as an open source product? Was it started as an actual product for Honeycomb that wasn't open source and then it was open sourced? How did all that come about?

-**Phillip Carter:** Yeah, OpenTelemetry was founded by several folks... I don't think -- well, I don't remember if Liz was on the founding group or not. I don't think she was. But she was part of the initial governance committee. So basically, there were several open-source projects, like OpenTracing, OpenCensus, Jaeger, Zipkin, all solving various flavors of the same problem, but not quite completely enough, to the point where there needed to be another project spun up to do something in a slightly better way. And so people who worked on all of these things, and also other folks who worked at Splunk, or at the time, I think Morgan was at Google, and stuff, but... 
Several of these folks, they all got together and they're like "Hey, we're all solving anywhere from 50% to 75% of the problem space that we need to be solving for, and we're all doing it independently. Let's all get together and go to 100% of the problem space that we need to solve for, together, as one standard. Because a million different standards that are slightly incomplete for certain use cases that people have is like -- we're not going to grow, and the world of proprietary instrumentations from all the other vendors is just going to stay there... And then that has its own set of negative consequences that organizations actually do not like... But we're not meeting them where they are, so let's get together and do that." It was this consortium in about 2019 where they did this.

+**Phillip Carter:** Yeah, OpenTelemetry was founded by several folks... I don't think -- well, I don't remember if Liz was on the founding group or not. I don't think she was. But she was part of the initial governance committee. So basically, there were several open source projects, like OpenTracing, OpenCensus, Jaeger, Zipkin, all solving various flavors of the same problem, but not quite completely enough, to the point where there needed to be another project spun up to do something in a slightly better way. And so people who worked on all of these things, and also other folks who worked at Splunk, or at the time, I think Morgan was at Google, and stuff, but... Several of these folks, they all got together and they're like "Hey, we're all solving anywhere from 50% to 75% of the problem space that we need to be solving for, and we're all doing it independently. Let's all get together and go to 100% of the problem space that we need to solve for, together, as one standard. Because a million different standards that are slightly incomplete for certain use cases that people have is like -- we're not going to grow, and the world of proprietary instrumentations from all the other vendors is just going to stay there... And then that has its own set of negative consequences that organizations actually do not like... But we're not meeting them where they are, so let's get together and do that." It was this consortium in about 2019 where they did this.

**Autumn Nash:** I love that you got that many engineers and people good at things to like -- it's like, for them to say "Okay, our projects maybe need some improvement, and we should work together." You got people that are engineers in a room to say that out loud?

@@ -216,7 +216,7 @@ And every time I'd listen to Charity or Liz talk... Like, it doesn't matter what

**Autumn Nash:** Not just that, but you need to be able to get the bigger picture of the data. People collect data all the time, and they have no idea what to do with all those logs, and what to do --

-**Justin Garisson:** \[00:40:09.10\] Yeah, you just keep zooming out. When you have a log, you're like "Oh, I'm going to print here, in my code." I'm like "Oh, I've found the piece. I'm terrible at GDB, so I'm just going to do this one print and I find it." But once you zoom out to those 10 things calling it, "Okay, how do those things call? Is this network related? Is this DNS?" It's always DNS... And then just keep zooming out... It's like "Okay, now how's the application? What does the customer experience look like?" Honeycomb seems like they always approach it from that side of it. What's Charity's line? "Uptime doesn't matter if your customer has a bad day", or something like that. 
If your customer is angry, then all the uptime in the world doesn't matter. +**Justin Garisson:** \[40:09\] Yeah, you just keep zooming out. When you have a log, you're like "Oh, I'm going to print here, in my code." I'm like "Oh, I've found the piece. I'm terrible at GDB, so I'm just going to do this one print and I find it." But once you zoom out to those 10 things calling it, "Okay, how do those things call? Is this network related? Is this DNS?" It's always DNS... And then just keep zooming out... It's like "Okay, now how's the application? What does the customer experience look like?" Honeycomb seems like they always approach it from that side of it. What's Charity's line? "Uptime doesn't matter if your customer has a bad day", or something like that. If your customer is angry, then all the uptime in the world doesn't matter. And so being able to just go back to those basics... But as an engineer, where do I look? How do I look there? How did you start building that ML/AI infrastructure stuff to get people in, to nudge them in the right direction? @@ -228,7 +228,7 @@ Imagine you're a new engineer brought into this eCommerce site, you don't know h Using the Honeycomb tool - we have this anomaly detection thing where you can... It's called Bubble Up, but I think now we're -- at any rate, you do this thing called the bubble up, and you can visually select this part of a graph that looks a little different than the others... And then it'll just automatically compare all of the events in the selection, versus what's not in the selection, and display out via literally histograms all the values of each of those events, and all the values that correspond to all the columns in each of those events, and sort them in a way that you can say "Oh, well, there's these five columns, and these handful of values that associate with these columns in my data that are actually the characteristics that associate most with this spike in latency that I see here." -\[00:44:08.26\] But I didn't have to know upfront that "Oh, there's this one attribute in my data that is the thing that I should be looking at." And it's this generalizable thing. It's this thing that works -- like, when people watch this flow, they're like "Wow, this is how I actually do want to be debugging. Because when I get onboarded onto a thing, I don't get onboarded onto what every line of code does first. I get onboarded onto "What is the purpose of this thing? What should it be doing? What matters the most to our business?" This is how, frankly, most organizations should probably be working anyways. +\[44:08\] But I didn't have to know upfront that "Oh, there's this one attribute in my data that is the thing that I should be looking at." And it's this generalizable thing. It's this thing that works -- like, when people watch this flow, they're like "Wow, this is how I actually do want to be debugging. Because when I get onboarded onto a thing, I don't get onboarded onto what every line of code does first. I get onboarded onto "What is the purpose of this thing? What should it be doing? What matters the most to our business?" This is how, frankly, most organizations should probably be working anyways. **Autumn Nash:** It is. And they're not. @@ -242,7 +242,7 @@ And so there's degrees to this, where it can actually work very, very well, espe This was something that like I experimented with in April of 2023. And in the course of doing that experimentation, I found "Wow, this thing actually does output really good stuff." 
And I know that there's going to be a very long tail of like it might not get something quite right, but... This shifted from "Is this even possible?" to "Oh, this is definitely possible." And this is probably also going to be useful for several people out of the box already. But what can we do to make it really useful to as many people as possible? And that's kind of like where my mindset shifted. And that's how the thing ultimately started into building our natural language querying system. -**Autumn Nash:** \[00:48:09.17\] You know what I think is really impressive about everything that you've said? First, the fact that you actually went to the conferences and listened to your customers... Like, the feedback loop of like customers... And especially, people go to conferences with their competition, and don't do -- I love going to the different booths and talking to their engineers and talking to their PMs, and the disconnect between their engineers or their PMs and their customers is wild. And then they never have the right information about the competitors that are 10 feet away from them, and they could have easily went to the booth and asked a question... And it blows my entire mind every time. +**Autumn Nash:** \[48:09\] You know what I think is really impressive about everything that you've said? First, the fact that you actually went to the conferences and listened to your customers... Like, the feedback loop of like customers... And especially, people go to conferences with their competition, and don't do -- I love going to the different booths and talking to their engineers and talking to their PMs, and the disconnect between their engineers or their PMs and their customers is wild. And then they never have the right information about the competitors that are 10 feet away from them, and they could have easily went to the booth and asked a question... And it blows my entire mind every time. That is the best part about being a solutions architect, a PM, or whatever you want to call it... Just being able to compare the product and see where -- like, half the time there's low-hanging fruit that would be so easy to fix or to know more about to make your products better. But also, you're using AI for something that's going to actually make people's lives better, not something that we didn't ask for, not in a way that you're handicapping your developers. @@ -254,7 +254,7 @@ So the way that you're describing it is so exciting, because you're teaching the **Phillip Carter:** Yeah, yeah. And that was definitely the goal. And we have much more ambitious goals around this thing as well. We have gotten feedback from people where like they do want to drive their querying process end-to-end, using natural language. And you can use Query Assistant to do that today. It's not as good at that. And frankly, that was a scoping exercise, because the primary problem that we were seeking to solve right then and there - which, frankly, I think was the right problem - is a lot of people would come in, try to use Honeycomb, but they'd see our interface and they'd be like "I just don't understand how I can start to use this. I can express how I would like to start out, but I don't know how to shape that in a Honeycomb query." And we've found some pretty good success metrics where people would come in, they would use the natural language query feature a few times; sometimes quite a lot, actually. 
And then they would start using our manual querying interface to manually tweak things and explore a little bit more without even using the natural language portion of it. Sometimes they would, and it does support that, but sometimes they wouldn't. And from our perspective, we're like "Great." We don't really care if you're using the AI feature or the non-AI feature. We want you to explore your data. And if this helped you get to the point where you could start exploring more and get more curious, great. That's the problem that we're solving. -**Autumn Nash:** \[00:52:04.22\] This is the biggest value prop that I hear you saying out of everything, is that you're making them more efficient, but you're... So it's like alarm fatigue, right? You're going to start to ignore your alarms if they're always going off. And when you get so much log data that it's so much that you just need to delete it, because you don't know where to start, and it's filling up all your disk space, or... You know what I mean? But what you're doing right now is you're helping them to focus and to break down a problem, and that's kind of all of what engineering is. You take these big problems, these big systems -- especially at scale, right? And you break down the problem so you can learn... +**Autumn Nash:** \[52:04\] This is the biggest value prop that I hear you saying out of everything, is that you're making them more efficient, but you're... So it's like alarm fatigue, right? You're going to start to ignore your alarms if they're always going off. And when you get so much log data that it's so much that you just need to delete it, because you don't know where to start, and it's filling up all your disk space, or... You know what I mean? But what you're doing right now is you're helping them to focus and to break down a problem, and that's kind of all of what engineering is. You take these big problems, these big systems -- especially at scale, right? And you break down the problem so you can learn... I feel like we always talk about observability and metrics, but it's so hard to get the good metrics, and figure out what you should be paying attention to, and figuring out how your system works, whether you're new, or just maybe one of your coworkers built something and now you have to go fix it, or somebody leaves. Especially with all the turnover in tech right now, we are going to have these huge systems at scale that you might not have built. @@ -276,7 +276,7 @@ In college, you sit there and you build these projects from scratch, but that's **Autumn Nash:** This shows that this product will outlive the AI hype. This is something worth investing in. There's always that turnaround cycle of "Oh, this new thing is cool", and then you onboard to it, and then you have to migrate off of it, and you're just stuck in this tech debt cycle... And I feel like this is like the value prop of why your product is worth giving a try to, worth investing in, and it's going to outlive this, because it actually delivers value to engineers, and it's not just a hype thing. It very much fits what we've been trying to teach engineers, in general. You need to be effective, break down problems, and just... You go and seek out that information. And this is just like taking, what we already were doing, but to a more effective level. -\[00:56:13.27\] A lot of times they're inventing the wheel just to do it, and just to make it very expensive... But you're not. You're actually just -- you're improving on the wheel. You know what I mean? 
So I think that's just really cool. +\[56:13\] A lot of times they're inventing the wheel just to do it, and just to make it very expensive... But you're not. You're actually just -- you're improving on the wheel. You know what I mean? So I think that's just really cool. **Phillip Carter:** Yeah. Yeah. And as you might imagine, we have many more things that we're looking to build, and investing in our own - I guess you'd call it like AI team that we have. But philosophically, we're very much aligned with the thing that we went in on last year, when we built the first version of this, of like "We're here to help." We recognize that people come into the product, and the problems that they come in with are multifaceted, and there's people of varying levels, where like some people want zero assistance, and that's great. The product is amazing for them. People want some assistance... @@ -308,7 +308,7 @@ There's things that we can probably even suggest. "Well, based off of the shape **Autumn Nash:** So you have very strong coffee, and... -**Phillip Carter:** \[00:59:45.03\] Yeah, it's strong, and - that's something in the coffee roasting community, because of course, there's a community for this... People really, really nerd out on it. It's amazing. There's all of these curves that you can draw about different dimensions of the coffee, and you can optimize for a particular aspect of it, depending on how you roast it, when you roast it, how much you let it degas, or as I call them, bean farts - degas its CO2. +**Phillip Carter:** \[59:45\] Yeah, it's strong, and - that's something in the coffee roasting community, because of course, there's a community for this... People really, really nerd out on it. It's amazing. There's all of these curves that you can draw about different dimensions of the coffee, and you can optimize for a particular aspect of it, depending on how you roast it, when you roast it, how much you let it degas, or as I call them, bean farts - degas its CO2. **Autumn Nash:** I love that you're nerding out about coffee the same way that you nerd out about technology.