Smashing Podcast Episode 22 With Chris Coyier: What Is Serverless?
Today, we’re talking about Serverless architectures. What does that mean, and how does it differ from how we might build sites currently? I spoke to Chris Coyier to find out.
- Chris’ microsite The Power of Serverless for Front-end Developers
- Chris on Twitter
- ShopTalk Show podcast
- “Setting Up Redux For Use In A Real-World Application,” by Jerry Navi
- “Can You Design A Website For The Five Senses?,” by Suzanne Scacca
- “Creating A Static Blog With Sapper And Strapi,” by Daniel Madalitso Phiri
- “A Practical Guide To Product Tours In React Apps,” by Blessing Krofegha
- “How To Create A Porsche 911 With Sketch,” by Nikola Lazarević
Drew McLellan: He’s a web designer and developer who you may know from CSS-Tricks, a website he started more than 10 years ago and that remains a fantastic learning resource for those building websites. He’s the co-founder of CodePen, the browser-based coding playground and community used by front-enders all around the world to share what they make and find inspiration from those they follow. Alongside Dave Rupert, he’s the co-host of ShopTalk Show, a podcast all about making websites. So we know he knows a lot about web development, but did you know he once won a hot dog eating competition using only his charm? My smashing friends, please welcome Chris Coyier. Hello Chris, how are you?
Chris Coyier: Hey, I’m smashing.
Drew: I wanted to talk to you today not about CodePen, and I don't necessarily want to talk to you about CSS-Tricks, which is one of those amazing resources that I’m sure everyone knows appears right at the top of Google Search results when looking for answers about any web dev question. Up pops your face and there’s a useful blog post written by you or one of your guest contributors.
Chris: Oh, I used to actually do that. There was a... I don't know, it probably was during the time of when Google had that weird social network. What was that? Google Plus?
Drew: Oh, Plus, yeah.
Chris: Yeah, where they would associate a website with a Plus account, and so my Plus account had an avatar, and the avatar was me, so it would show up in search results. I think those days are gone. I think if you...
Drew: I think so, yeah-
Drew: But I kind of wanted to talk to you about something that has been a little bit more of a sort of side interest of yours, and that’s this concept of serverless architectures.
Chris: Mm (affirmative).
Drew: This is something you’ve been learning sort of more about for a little while. Is that right?
Drew: Seems like a whole new world has opened up, whereas if you were just a front-end coder... I say, just a front-end coder, I shouldn't. If you’re a front-end coder, and you’re used to working with a colleague or a friend to help you with the back-end implementation, suddenly that’s opened up. And it’s something that you can manage more of the whole stack yourself.
Chris: Yeah, yeah. That’s it.
Drew: Addressing the elephant in the room, right at the top. We’re talking about serverless, and obviously, naming things is hard. We all know that. Serverless architecture doesn't mean there are no servers, does it?
Chris: I think it’s mandatory, like if this is the first podcast you’re hearing of it, or in the first... you’re only hearing the word "serverless" in the first dozen times you ever heard it, it’s mandatory that you have a visceral reaction and have this kind of, "Oh, but there are still servers." That’s okay. If that’s happening to you right now, just know that, that’s a required step in this. It’s just like anything else in life. There’s stages to understanding. The first time you hear something, you’re required to kind of reject it a little bit, and then only after a dozen times or so, or after it’s proven its worth a little bit to you, do you get to enter the further stages of understanding here. But the word has won, so if you’re still fighting against the word "serverless", I hate to tell you, that the train has left the station there. The word is already successful. You’re not going to win this one. So, sorry.
Chris: It happens just at the CDN level, which I guess is a server, but I tend to not think of CDNs as a server. Not as obviously as something else. It’s starting to feel even more serverless-y lately. Is a CDN a server? I mean, I guess it’s a computer somewhere, but it feels like even less server-y.
Drew: It feels like, yes, a CDN may be a server, but it’s the most sort of minimal version of a server. It’s like a thin server, if you like.
Chris: Yeah. Sure.
Drew: All right. I’ve heard it said... I can't remember the source to credit, unfortunately, but I’ve heard serverless described as being "like using a ride-sharing service like Uber or Lyft" or whatever. You can be carless and not own a car, but that doesn't mean you never use a car.
Chris: Yeah, it doesn't mean cars don't exist. Mm, that’s nice.
Drew: You just summon one when you need it, but at the same time, you’re not paying the upfront purchase cost of a car. You’re not paying maintenance or fuel or-
Chris: Right, and the pricing makes sense, too, right? That’s nice. That’s a nice analogy, I think. And then, because it’s at the CDN level too, it just intercepts HTTP requests that are already happening, which means you don't ask it... you don't send a request to it and it sends a request back. It’s just happening during the request naturally, which also makes it feel less server-y. I don't know, it’s interesting. It’s interesting for sure. So that’s a big deal, though, that you brought up the pricing thing. That you only pay for what you use. That’s significant too, because... let’s say, you’re a back-end dev, who’s used to spinning up servers their whole life. And they run the costs, "I need this kind of server with this kind of memory and this kind of CPU and these kind of specs. And this is how much it’s going to cost." Serverless comes along and chops the head off of that pricing.
Chris: So, even if you’re a back-end dev who just doesn't like this that much, who’s just not into it, like your skill set is just what it is over the years, you compare the price and you’re like, "What? I could be paying 1% of what I was paying before?" You are not allowed to not care about that, right? If you’re this back-end dev that’s paying a hundred times more for their service than they need to be paying, you’re just kind of bad at your job then. Sorry to say. This has come along and this has shattered pricing in a lot of ways. You have to care about that. And it’s kind of cool that somebody else is... It’s not like you don't have to worry about security at all, but it’s not your server. You don't have... your lambda or cloud function, or your worker, or whatever, isn't sitting on a server that’s right next to some really sensitive data on your own network. It’s not right next to your database.
Chris: If somebody writes code that somehow tries to eject itself from the worker or the lambda, or whatever, and tries to get access to other things in its way, there’s nothing there to get. So the security’s a big deal too, so again, if that’s your job as the server admin, to deal with the security of this thing, running certain things in Lambda, you just get some natural security from it, which is great. So, it’s way cheaper. It’s way more secure. It encourages this small, modular architecture, which can be a good idea. It seems to be domino after domino of good ideas here. That’s why it’s notable. You know?
Drew: Yeah, I mean, traditionally with a server based architecture that we’ve been running for decades on the web, you have a web server that you run yourself. It holds your front-end code, your back-end code, your database and everything. Then you need to maintain that and keep it running and pay the bills, and even if it’s not being used, it’s there clocking up bills. The user would make a request and it would build all that HTML query stuff from the database, send it all down the line to the browser. That process works. It’s how loads of things are built. It’s probably the majority of how the web is built. It’s how things like WordPress work. Is this really a problem that we need to solve? I mean, we’ve talked about costs a little bit. What are the other sort of problems with that, that we're... that we need to address, and that serverless might help us with?
Chris: Yeah, the problems with the old school approach. Yeah, I don't know, maybe there isn't any. I mean, I’m not saying the whole web needs to change their whole... the whole thing overnight. I don't know. Maybe it doesn't really, but I think it opens up doors. It just seems like, when good ideas arrive like this, they just slowly change how the web operates at all. So, if there’s some CMS that is built in some way that expects a database to be there, it means that maybe the hosts of the future will start leveraging this in interesting ways. Maybe it feels to you like it’s still just a traditional server, but the hosts themselves have farmed it out, how they operate, to serverless architectures. So you don't even really know that that’s happening, but they’ve found a way to slash their costs by hosting the stuff that you need in serverless ways. Maybe you don't even need to care as a developer, but at a meta level, that’s what’s happening. Maybe. I don't know.
Chris: It also doesn't mean that... Databases are still there. If it turns out that architecturally having a relational database is the correct way to store that data, great. I mention that because this world of Serverless is kind of growing up at the same time that JAMstack is. And JAMstack is this architecture that's, "You should be serving your website off of static hosts, that run nothing at all except for..." They’re like little CDNs. They’re like, "I can do nothing. I don't run PHP. I don't run Ruby. I run nothing. I run on a tiny little web server that’s just designed to serve static files only."
Drew: I suppose you don't have to wholesale... be looking at one architecture or another. There’s an area in the middle where parts of an infrastructure might be more traditional and parts could be serverless, I’m guessing?
Chris: Yeah. Well, they’re trying to tell you that anyway. Anybody that wants to sell you any part of their architecture is like, "You don't have to buy in all right now. Just do it a little bit." Because of course, they want you to dip your toe into whatever they’re selling, because once you dip the toe, the chances that you splash yourself into the pool is a lot higher. So, I think that... it’s not a lie, though, necessarily, although I find a little less luck in... I don't want my stack to be a little bit of everything. I think there’s some technical debt there that I don't always want to swallow.
Drew: Mm (affirmative).
Chris: But it’s possible to do. I think the most quoted one is... let’s say I have a site that has an eCommerce element to it, which means... and let’s say large scale eCommerce, so 10,000 products or something, that this JAMstack architecture hasn't gotten to the point where that’s always particularly efficient to rebuild that statically. So, the thinking goes, "Then don't." Let that part kind of hydrate naturally with... hit serverless functions and get the data that it needs, and do all that. But the rest of the site, which isn't... there’s not as many pages, there’s not as much data, you could kind of pre-render or whatever. So a little bit of both.
Drew: Of course, plenty of people are dealing with legacy systems that... some old database thing that was built in the 2000s that they may be able to stick a sort of JSON API layer on top of...
Drew: ... and build something more modern, and perhaps serverless, and then still interact with those legacy systems by sort of gluing it altogether in a weird way.
Chris: Yeah. I like that though, isn't it? Aren't... most websites already exist. How many of us are totally green-fielding websites? Most of us work on some crap that already exists that needs to be dragged into the future for some reason, because I don't know, developers want to work faster, or you can't hire anybody in COBOL anymore, or whatever the story is. You know?
Drew: So terminology-wise, we’re talking about JAMstack, which is this methodology of running code pretty much in the browser, serving it from a CDN. So, not having anything dynamic on the server. And then when we talk about serverless, we’re talking about those small bits of functionality that run on a server somewhere else. Is that right? That we were talking about these cloud function kind of-
Chris: Yeah, I mean, they just happen to be both kind of hot ideas right now. So it’s kind of easy to talk about one and talk about the other. But they don't necessarily need to be together. You could run a JAMstack site that has nothing to do with serverless anything. You’re just doing it, you just pre-build the site and run it, and you can use serverless without having to care about JAMstack. In fact, CodePen does nothing JAMstack at all. Not that we want to talk about CodePen necessarily, but it’s a Ruby on Rails app. It runs on a whole bunch of AWS EC2 instances and a variety of other architecture to make it happen. But we use serverless stuff whenever we can for whatever we can, because it’s cheap and secure, and just a nice way to work. So, no JAMstack in use at all but serverless all over the place.
Drew: That’s quite interesting. What sort of tasks are you putting serverless to on CodePen?
Chris: Theoretically, you could do that in the client. But these libraries that do pre-processing are pretty big. I don't think I want to ship the entire Sass library to you, just to run that thing. I don't want to... it’s just not, that’s not the right architecture for this necessarily. Maybe it is down the road, I mean, we could talk about offline crap, yada, yada, Web Workers. There’s a million architectural things we could do. But here’s how it does work now, is there’s a lambda. It processes Sass. It has one tiny, tiny, tiny, little job.
Chris: You send it this blob of Sass and it sends you stuff back, which is the processed CSS, maybe a source map, whatever. It has one tiny little job and we probably pay for that lambda, like four cents or something. Because lambdas are just incredibly cheap and you can hammer it too. You don't have to worry about scale. You just hit that thing as much as you want and your bill will be astonishingly cheap. There are moments where serverless starts to cross that line of being too expensive. I don't know what that is, I’m not the master of stuff like that. But generally, any serverless stuff we do, we basically... it all nearly counts as free, because it’s that cheap. But there’s one for Sass. There’s one for Less. There’s one for Babel. There’s one for TypeScript. There’s one for... All those are individual lambdas that we run. Here’s some code, give it to the lambda, it comes back, and we do whatever we’re going to do with it. But we use it for a lot more than that, even recently.
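As a rough sketch, one of those tiny single-purpose functions might look something like this in Node. This is illustrative only, not CodePen’s actual code; the compiler is injected as an argument so the same handler shape works with the real `sass` package or a stub.

```javascript
// A minimal sketch of a single-purpose "compile Sass" serverless function.
// `compile` is injected so the handler can be wired to the real `sass`
// npm package in production, or a stub in tests.
function makeSassHandler(compile) {
  return async (event) => {
    const { source } = JSON.parse(event.body || "{}");
    if (!source) {
      return { statusCode: 400, body: JSON.stringify({ error: "No Sass source sent" }) };
    }
    try {
      // One tiny job: Sass in, CSS out.
      return { statusCode: 200, body: JSON.stringify({ css: compile(source) }) };
    } catch (err) {
      // Bad Sass comes back as an error payload, not a crash.
      return { statusCode: 422, body: JSON.stringify({ error: err.message }) };
    }
  };
}

// In a real Lambda you might wire it up like:
// exports.handler = makeSassHandler((s) => require("sass").compileString(s).css);
```

The front end then just POSTs the Sass blob to that function’s URL and gets CSS back.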
Chris: Here’s an example. Every single Pen on CodePen has a screenshot. That’s kind of cool, right? So, the people make a thing and then we need a PNG or a JPEG, or something of it, so that we can... that way when you tweet it, you get the little preview of it. If you share it in Slack, you get the little preview of it. We use it on the website itself to render... instead of an iframe, if we could detect that the Pen isn't animated, because an image is much lighter than an iframe, so why not use the image? It’s not animated anyway. Just performance gains like that. So each of those screenshots has a URL to it, obviously. And we’ve architected it so that that URL is actually a serverless function. It’s a worker. And so, if that URL gets hit, we can really quickly check if we’ve already taken that screenshot or not.
Chris: That’s actually enabled by CloudFlare Workers, because CloudFlare Workers are not just a serverless function, but they have a data store too. They have this thing called key-value store, so the ID of that, we can just check really quick and it’ll be, "True or false, do you have it or not?" If it’s got it, it serves it. And it serves it over CloudFlare, which is super fast to begin with. And then gives you all this ability too. Because it’s an image CDN, you can say, "Well, serve it in the optimal format. Serve it as these dimensions." I don't have to make the image in those dimensions. You just put the dimensions in the URL and it comes back as that size, magically. So that’s really nice. If it doesn't have it, it asks another serverless function to make it really quick. So it’ll make it and then it’ll put it in a bucket somewhere... because you have to have an origin for the image, right? You have to actually host it somewhere usually. So we put it in an S3 bucket real quick and then serve it.
Chris: So there’s no queuing server, there’s no nothing. It’s like serverless functions manage the creation, storage and serving of these images. And there’s like 50 million or 80 million of them or something. It’s a lot, so it handles that at scale pretty nicely. We just don't even touch it. It just happens. It all happens super fast. Super nice.
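A sketch of that check-then-generate flow, with made-up names: `kv` stands in for the Workers key-value store (anything with async `get`/`put`), and `generate` for the other serverless function that actually makes the screenshot.

```javascript
// Illustrative sketch of serving a screenshot URL at the edge.
// `kv` mimics Workers KV; `generate` is the function that renders the image.
async function serveScreenshot(penId, kv, generate) {
  const cached = await kv.get(penId);  // really quick "do we have it?" check
  if (cached) {
    return cached;                     // already taken: serve straight from the edge
  }
  const image = await generate(penId); // not yet: ask another function to make it
  await kv.put(penId, image);          // record it (in reality, an S3 bucket holds the file)
  return image;
}
```

The second request for the same Pen never touches the generator again; it comes straight out of the key-value store.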
Drew: I guess it... well, a serverless function is ideally going to suit a task that needs very little knowledge of state of things. I mean, you mentioned CloudFlare’s ability to store key-value pairs to see if you’ve got something cached already or not.
Chris: Yeah. That’s what they’re trying to solve, though, with those. Those key-value pairs, is that... I think that traditionally was true. They’re like, "Avoid state in the thing," because you just can't count on it. And CloudFlare Workers are being like, "Yeah, actually, you can deal with state, to some degree." It’s not as fancy as a... I don't know, it’s key values, so it’s a key in a value. It’s not like a nested, relational fancy thing. So there’s probably some limits to that. But this is baby days for this. I think that stuff’s going to evolve to be more powerful, so you do have some ability to do some state-like stuff.
Drew: And sometimes the limitation, that sort of limited ability to maintain state, or the fact that you have no... you want to maintain no state at all, kind of pushes you into an architecture that gives you this sort of... Well, when we talk about the software philosophy of "Small Pieces Loosely Joined", don't we?
Chris: Mm (affirmative).
Drew: Where each little component does one thing and does it well. And doesn't really know about the rest of the ecosystem around it. And it seems that really applies to this concept of serverless functions. Do you agree?
Chris: Yeah. I think you could have a philosophical debate whether that’s a good idea or not. You know? I think some people like the monolith, as it were. I think there’s possible... there’s ways to overdo this and to make too many small parts that are too hard to test altogether. It’s nice to have a test that’s like, "Oh, I wonder if my Sass function is working. Well, let’s just write a little test for it and make sure that it is." But let’s say, what matters to the user is some string of seven of those. How do you test all seven of them together? I think that story gets a little more complicated. I don't know how to speak super intelligently to all that stuff, but I know that it’s not necessarily that, if you roll with all serverless functions that’s automatically a better architecture than any other architecture. I like it. It reasons out to me nicely, but I don't know that it’s the end-all be-all of all architectures. You know?
Chris: It’s nice.
Drew: ... sort of idea. But, one thing we know about the web, is it’s designed to be resilient because network’s fragile.
Chris: Mm (affirmative).
Drew: How robust is the sort of serverless approach? What happens if something... if one of those small pieces goes away?
Chris: That would be very bad. You know? It would be a disaster. Your site would go down just like any other server, if it happens to go down, I guess.
Drew: Are there ways to mitigate that, that are particularly -
Chris: I don't know.
Drew: ... suited to this sort of approach, that you’ve come across?
Chris: I actually don't know. Maybe you know some strategies that I don't, on resiliency of serverless.
Drew: I guess there’s a failure mode, a style of failure, that could happen with serverless functions, where you run a function once and it fails, and you can run it a second time immediately afterwards and it would succeed, because it might hit a completely different server. Or whatever the problem was with that run may not exist on a second request. The issue of an entire host being down is one thing, but maybe there are... you have individual problems with the machine. You have a particular server where its memory has gone bad, and it’s throwing a load of errors, and the first time you hit it, it’s going to fail. Second time, that problem might have been routed around.
Chris: Companies that tend to offer this technology, you have to trust them, but they also happen to be the type of companies that... this is their pride. This is the reason why people use them is because they’re reliable. I’m sure people could point to some AWS outages of the past, but they tend to be a little rare, and not super common. If you were hosting your own crap, I bet they got you beat from an SLA percentage kind of level. You know? So it’s not like, "Don't build in a resilient way," but generally the type of companies that offer these things are pretty damn reliable. The chances of you going down because you screwed up that function are a lot higher than because their architecture is failing.
Drew: I suppose, I mean, just like anything where you’re using an API or something that can fail, is just making sure you structure your code to cope with that failure mode, and to know what happens next, rather than just throwing up an error to the user, or just dying, or what have you. It’s being aware of that and asking the user to try again. Or trying again yourself, or something.
Chris: Yeah, I like that idea of trying more than once, rather than just being, "Oh no. Fail. Abort." "I don't know, why don't you try again there, buddy?"
Drew: So I mean, when it comes to testing and development of serverless functions, sort of cloud functions, is that something that can be done locally? Does it have to be done in the cloud? Are there ways to manage that?
Chris: It’s a little different story when you’re talking about an HTTP request to it, that’s the thing that you’re trying to test. Does it respond to the request properly? And does it return the stuff properly? I don't know. The network might get involved there. So you might want to write tests at that level. That’s fine. I don't know. What is the normal story there? You spin up some kind of local server or something that serves it. Use Postman, I don't know. But there's... Frameworks try to help too. I know there’s Serverless.com, which is just terribly confusing, but there’s literally a company called Serverless and they make a framework for writing serverless functions that helps you deploy them.
Chris: So if you npm install serverless, you get their framework. And it’s widely regarded as very good, because it’s just very helpful, but they don't have their own cloud or whatever. You write these and then it helps you get them to a real lambda. Or it might work with multiple cloud providers. I don't even know these days, but their purpose of existing is to make the deployment story easier. I don't know what... AWS is not renowned for their simplicity. You know? There’s all this world of tooling to help you use AWS and they’re one of them.
Chris: They have some kind of paid product. I don't even know what it is exactly. I think one of the things they do is... the purpose of using them is for testing, is to have a dev environment that’s for testing your serverless function.
Chris: Yeah. I mean, if you want to use no tooling at all, I think they have a really... like AWS, specifically, has a really rudimentary GUI for the thing. You can paste the code in there and hit save and be like, "Okay, I guess it’s live now." That’s not the best dev story, but I think you could do it that way. I know CloudFlare workers have this thing called Wrangler that you install locally. You spin it up and it spins up a fake browser on the top and then dev tools below. Then you can visit the URL and it somehow intercepts that and runs your local cloud function against it. Because one of the interesting things about workers is... you know how I described how it... you don't hit a URL and then it returns stuff. It just automatically runs when you... when it intercepts the URL, like CDN style.
Chris: So, one of the things it can do is manipulate the HTML on the way through. The worker, it has access to the complete HTML document. They have a jQuery-esque thing that’s like, "Look for this selector. Get the content from it. Replace it with this content. And then continue the request." So you can mess with code on the way through it. To test that locally, you’re using their little Wrangler tool thing to do that. Also, I think the way we did it was... it’s also a little dangerous. The second you put it live, it’s affecting all your web traffic. It’s kind of a big deal. You don't want to screw up a worker. You know? You can spin up a dev worker that’s at a fake subdomain, and because it’s CloudFlare, you can... CloudFlare can just make a subdomain anyway. I don't know. It’s just kind of a nice way to do a... as you’re only affecting sub-domain traffic, not your main traffic yet. But the subdomain’s just a mirror of a production anyway, so that’s kind of a... that’s a testing story there.
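For a sense of what that jQuery-esque rewriting looks like, here is a hypothetical Worker. `HTMLRewriter` is a Cloudflare Workers platform API (it doesn't exist in plain Node), and the selector and replacement markup here are invented for the example.

```javascript
// Hypothetical Worker that messes with HTML on the way through.
// The element handler is a plain object with an element() method,
// which is what HTMLRewriter calls for each matching element.
const latestPostsHandler = {
  element(el) {
    // Swap the placeholder's contents for content fetched from elsewhere.
    el.setInnerContent("<li>Newest post from the CMS</li>", { html: true });
  },
};

const worker = {
  async fetch(request) {
    const response = await fetch(request); // let the origin respond normally
    return new HTMLRewriter()
      .on("#latest-posts", latestPostsHandler) // jQuery-esque selector match
      .transform(response);                    // stream the rewritten HTML out
  },
};
// In a real Worker: export default worker;
```

Because the rewrite happens as the response streams through the CDN, the visitor just sees a normal server-rendered page.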
Chris: It brings up an interesting thing, though, to me. It’s like... imagine you have two websites. One of them is... for us it’s like a Ruby on Rails app. Whatever. It’s a thing. But we don't have a CMS for that. That’s just like... it’s not a CMS, really. I think there’s probably Ruby CMSs, but there’s not any renowned ones. You know? It seems like all the good CMSs are PHP, for some reason. So, you want a quality CMS. Drew, you’ve lived in the CMS market for a long time -
Chris: So it’s kind of cool. I think you can imagine a fetch request on the browser probably takes, I don't know, a second and a half or something. It probably takes a minute to do it. But because these are... site B is hosted on some nice hosting and Cloudflare has some... who knows what kind of super computers they use to do it. They do. Those are just two servers talking to each other, and that fetch request happens just so super duper, duper fast. It’s not limited to the internet connection speed of the user, so that little request takes like two milliseconds to get that data. So it’s kind of this cool way to stitch together a site from multiple sources and have it feel like, and behave like, a server rendered page. I think there’s a cool future to that.
Drew: Are there any sort of conventions that are sort of springing up around serverless stuff. I’m sort of thinking about how to architect things. Say I’ve got something where I want to do two sort of requests to different APIs. I want to take in a postal address and geocode it against one, and then take those coordinates and send that to a florist who’s going to flower bomb my front yard or something. How would you build that? Would you do two separate things? Or would you turn that into one function and just make the request once from the browser?
Chris: Mm (affirmative). That’s a fascinating question. I’d probably have an architect function or something. One function would be the one that’s in charge of orchestrating the rest of them. It doesn't have to be, your website is the hub and it only communicates to this array of single sources. Serverless functions can talk to other serverless functions. So I think that’s somewhat common to have kind of an orchestrator function that makes the different calls and stitches them together, and returns them as one. I think that is probably smart and faster, because you want servers talking to servers, not the client talking to a whole bunch of servers. If it can make one request and get everything that it needs, I think that’s probably generally a good idea-
Drew: Yeah, that sounds smart. Yep.
Chris: But I think that’s the ultimate thing. You get a bunch of server nerds talking, they’ll talk about the different approaches to that exact idea in 10 different ways.
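One way to sketch the orchestrator idea from the flower-bombing example. Both API URLs and response shapes are invented for illustration; the point is that the browser makes one request and the servers fan out to the rest.

```javascript
// Sketch of an "orchestrator" serverless function: one request from the
// client, two fast server-to-server calls behind the scenes.
async function orderFlowerBomb(address, fetchFn = fetch) {
  // Step 1: geocode the postal address into coordinates.
  const geoRes = await fetchFn(
    "https://geocoder.example.com/lookup?q=" + encodeURIComponent(address)
  );
  const { lat, lng } = await geoRes.json();

  // Step 2: hand those coordinates to the florist API.
  const orderRes = await fetchFn("https://florist.example.com/bomb", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ lat, lng }),
  });

  // One stitched-together response goes back to the client.
  return orderRes.json();
}
```

The `fetchFn` parameter is just there so the function can be exercised with a stubbed fetch; in a deployed function you'd use the platform's global `fetch`.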
Drew: Yeah. No, that sounds pretty smart. I mean, you mentioned as well that this approach is ideal if you’re using APIs where you’ve got secret information. You’ve got API keys or something that you don't want to live in the client. Because I don't know, maybe this florist API charges you $100 every time you flower bomb someone.
Drew: You can basically use those functions to almost proxy the request and add in the secret information as it goes, and keep it secret. That’s a viable way to work?
Chris: Yeah, yeah. I think so. I mean, secrets are, I don't know, they’re interesting. They’re a form of buy in I think to whatever provider you go with, because... I think largely because of source control. It’s kind of like, you could just put your API key right in the serverless function, because it’s just going to a server, right? You don't even have to abstract it, really. The client will never see that code that executes, but in order for it to get there, there’s probably a source control along the way. It’s probably like you commit to master, and then master... then some kind of deployment happens that makes that thing go to the serverless function. Then you can't put your API key in there, because then it’s in the repo, and you don't put your API keys in repos. That’s good advice. Now there’s stuff. We’ve just done... at CodePen recently, we started using this git-crypt thing, which is an interesting way to put keys safely into your repos, because it’s encrypted by the time anybody’s looking at that file.
Chris: But only locally they’re decrypted, so they’re useful. So it’s just kind of an interesting idea. I don't know if that helps in this case, but usually, cloud providers of these things have a web interface that's, "Put your API keys here, and we’ll make them available at runtime of that function." Then it kind of locks... it doesn't lock you in forever but it kind of is... it’s not as easy to move, because all your keys are... you put in some input field and some admin interface somewhere.
Drew: Yeah, I think that’s the way that Netlify manage it.
Chris: They all do, you know?
Drew: Yeah. You have the secret environment variables that you can set from the web interface. That seems to work quite nicely.
Chris: Yeah, right. But then you got to leave... I don't know, it’s not that big of a deal. I’m not saying they’re doing anything nefarious or anything. How do you deal with those secrets? Well, it’s a hard problem. So they kind of booted it to, I don't know, "Just put them in this input field and we’ll take care of it for you, don't worry about it."
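The proxy-plus-secret pattern Drew describes might be sketched like this, where `FLORIST_API_KEY` and the URL are invented names, and `env` mirrors the provider-injected runtime configuration (like Netlify's environment variables).

```javascript
// Sketch of "proxy the request and add the secret on the way through".
// The client calls this function; the function adds the key server-side,
// so the key never ships to the browser.
async function proxyFloristOrder(payload, env, fetchFn = fetch) {
  return fetchFn("https://florist.example.com/bomb", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The secret comes from a runtime environment variable,
      // not from the repo and not from the client.
      Authorization: "Bearer " + env.FLORIST_API_KEY,
    },
    body: JSON.stringify(payload),
  });
}
```

The browser only ever sees your function's URL; the third-party API and its key stay on the server side of the line.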
Drew: Is there anything that you’ve seen that stands out as an obvious case for things that you can do with serverless, that you just couldn't do with a traditional kind of serverfull approach? Or is it just taking that code and sort of almost deploying it in a different way?
Chris: It’s probably mostly that. I don't know that it unlocks any possibility that you just absolutely couldn't run it any other way. Yeah, I think that’s a fair answer, but it does kind of commoditize it in an interesting way. Like, if somebody writes a really nice serverless function... I don't know that this exists quite yet, but there could kind of be a marketplace, almost, for these functions. Like, I want a really good serverless function that can take a screenshot. That could be an open source project with lots of eyeballs around it, that does a tremendously good job of doing it and solves all these weird edge cases. That’s the one I want to use. I think that’s kind of cool. You know? That you can kind of benefit from other people’s experience in that way. I think that will happen more and more.
Chris: It doesn't matter on a server. So, I could be like, "Hmm, well, I’ll just do it in Node then." I’ll have a statement at the top that says, "Words equal require words," or whatever. Have it randomize a number, pull it out of the array and return it. So that serverless function is eight lines of code with a package.json that pulls in this open source library. And then in my front-end code, there’s a URL to the serverless function. It hits that URL. The URL returns one word or a group of words or whatever. You build your own little API for it. And now, I have a really kind of nice, efficient thing. What was nice about that is, it’s so simple. I’m not worried about the security of it. I don't... you know?
Chris: You kind of like moved the script from one folder to the other. And that one happens to get deployed as a serverless function instead. How cool is that? You know? You’re using the same exact skill set, almost. There’s still some rough edges to it, but it’s pretty close.
Drew: It’s super cool. You’ve put together a sort of little micro site all about these ideas, haven't you?
Chris: Yeah. I was a little early to the game. I was just working on it today, though, because... it gets pull requests. The idea... well, it’s at serverless.css-tricks.com and... there’s a dash in CSS-Tricks, by the way. So it’s a subdomain of CSS-Tricks, and I built it serverlessly too, so this is... CSS-Tricks is like a WordPress site, but this is a static site generator site. All the content of it is in the GitHub repo, which is open-source. So if you want to change the content of the site, you can just submit a pull request, which is nice because there have been a hundred or so of those over time. But I built all the original content.
Drew: It’s a super useful place, because it lists... If you’re thinking, "Right, I want to get started with serverless functions," it lists all the providers who you could try it and...
Chris: That’s all it is, pretty much, is lists of technology. Yeah.
Drew: Which is great, because otherwise, you’re just Googling for whatever and you don't know what you’re finding. Yeah, it’s lists of API providers that help you do these sorts of things.
Chris: Forms is one example of that, because... so the minute that you choose to... let’s say, you’re going to go JAMstack, which I know that’s not necessarily the point of this, but you see how hand in hand they are. All of a sudden, you don't have a PHP file or whatever to process that form with. How do you do forms on a JAMstack site? Well, there’s any number of ways to do it. Everybody and their sister wants to help you solve that problem, apparently. Netlify, who I think invented the word JAMstack, try to help you natively, but you don't have to use them.
Chris: In fact, I was so surprised putting this site together. Let’s see. There’s six, nine, twelve, fifteen, eighteen, twenty-one, twenty-two services out there that want to help you serverlessly process your forms, on this site right now. If you want to be the 23rd, you’re welcome to it, but you have some competition out there. So the idea behind this is that you write a form in HTML, like literally a form element. And then the action attribute of the form, it can't point anywhere internally, because there’s nothing to point to. You can't process it, so it points externally. It points to whatever they want you to point it to. They’ll process the form and then they tend to do things that you’d expect them to, like send an email notification. Or send a Slack thing. Or then send it to Zapier and Zapier will send it somewhere else. They all have slightly different feature sets and pricing and things, but they’re all trying to solve that problem for you, like, "You don't want to process your own forms? No problem. We’ll process it for you."
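In markup, that looks something like the sketch below. The service URL is a hypothetical placeholder; each provider gives you your own endpoint to point the action attribute at.

```html
<!-- A literal form element whose action points at an external
     form-processing service instead of your own server.
     The URL is a made-up placeholder. -->
<form action="https://forms.example.com/f/abc123" method="POST">
  <label>Email <input type="email" name="email" required></label>
  <label>Message <textarea name="message"></textarea></label>
  <button type="submit">Send</button>
</form>
```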
Drew: Yeah, it’s a super useful resource. I’d really recommend everyone check it out. It’s serverless.css-tricks.com. So, I’ve been learning all about serverless. What have you been learning about lately, Chris?
Chris: Well, I’m still very much in this world too and learning about serverless stuff. I had an idea to... I used to play this online role playing game ages ago. I just recently discovered that it’s still alive. It’s a text based medieval fantasy kind of game. I played it when AOL was a thing, because AOL wanted to have these games that you had to be logged on to play it, because they wanted you to spend hours and hours on AOL, so they could send you these huge bills, which was, I’m sure, why they did so well at some point.
Drew: So billing by the second. Yeah.
Chris: Yeah. So games were big for them, if they could get you playing games with other people on there. So this game kind of... it didn't debut there, but it moved to AOL, because I’m sure they got a juicy deal for it, but it was so... I mean, it’s just, couldn't possibly be nerdier. You’re a dwarven mage and you get your rune staff from your leather sheath. And you type commands into it like a terminal. Then the game responds to you. I played that game for a very long time. I was very into it. I got into the community of it and the spirit of it. It was kind of a... it was like I was just alone by myself at my computer, but yet I look back on that time in my life, and be like, "That was a wonderful time in my life." I was really... I just liked the people and the game and all that. But then I grew up and stopped playing it, because life happens to you.
Chris: I only found out recently, because somebody started doing a podcast about it again... I don't know how I came across it, but I just did. I was like, "This game is alive and well in today’s world, are you kidding me? This text based thing." And I was more than happy to reactivate and get my old characters back and play it. But only to find out that the clients that they have you download for this game, haven't evolved at all. They are awful. They almost assume that you’re using Windows. There’s just these terribly cheesy poorly rendering... and it’s text based, you think it’d at least have nice typography. No. So I’m like, "I could be involved. I could write a client for this game. Put beautiful typography in it." Just modernize the thing, and I think the players of the game would appreciate it, but it felt overwhelming to me. "How can I do it?" But I find some open source projects. One of them is like... you can play the game through an actual terminal window, and it uses some open source libs to kind of make a GUI out of a terminal window.
Chris: I don't know. So that was kind of cool. I was like, "If they wrote that, there must be code in there for how to connect to the game and get it all going and stuff. So at least I have some starter code." I was trying to plan out the app, "Maybe I’ll do it in Flutter or something," so the final product app would work on mobile phones and, "I could really modernize this thing." But then I got overwhelmed. I was like, "Ah, this is too big a... I can't. I’m busy." But I found another person who had the same idea and they were way further along with it, so I could just contribute on a design level. And it’s been really fun to work on, but I’ve been learning a lot too, because it’s rare for me to jump into a project that’s somebody else’s baby, that I’m just contributing to a little bit, and that has totally different technology choices than I would have ever picked.
Chris: It’s an Electron app. They picked that, which is also kind of a cool way to go too, because it uses my web skills... so I’m not learning anything too weird, and it’s cross-platform, which is great. So, I’ve been learning a lot about Electron. I think it’s fun.
Drew: That’s fascinating. It’s always amazing how little side projects and things that we do for fun, end up being the place where we sometimes learn the most. And learn skills that can then feed back into our sort of daily work.
Drew: That’s fascinating-
Chris: Pretty cool.
Drew: Yeah. If you, dear listener, would like to hear more from Chris, you can find him on Twitter, where he’s @chriscoyier. Of course, CSS-Tricks can be found at css-tricks.com and CodePen at codepen.io. But most of all, I recommend that you subscribe to the ShopTalk Show podcast if you haven't already done so, at shoptalkshow.com. Thanks for joining us today, Chris. Do you have any parting words?
Chris: Smashingpodcast.com. I hope that’s the real URL.