Amit Jotwani, a Developer Educator at DigitalOcean, grapples with the challenge of engaging developers in an era when AI tools frequently replace manual documentation searches. He shares insights from his sabbatical, during which he built several applications and relied less on traditional resources like documentation and forums while navigating new technologies. As a result, Amit now advocates for creating self-contained, easily digestible documentation that caters to a new generation of developers who expect quick, accurate information directly from AI-powered tools.
Amit Jotwani: My name is Amit. I am a developer educator at DigitalOcean. This is my second DevRelCon. I was here last year, and I'm thrilled to be here again today. I'm gonna talk about a topic: DevRel without a developer.
This is my best shot playing with Gemini last night: a lonely mascot, who's apparently cut off a little bit, with documentation in the background that's just as lonely. He's waiting for developers, but there are no developers, because there's somebody else who's been visiting these docs the last couple of years. So that's the premise of this talk. And I wanna start off by making sure we are on the same page. This is what we have all been breathing for the last decade, two decades.
This is what we had: page views, blog traffic, installs of the SDK, how many people are typing messages into Discord servers, how many people are visiting your docs. This is what we've grown up with. Right?
This has been the mantra. This has been our signal. And the premise, or the promise, was: someone's here. They're looking for stuff. They're trying stuff.
When they need help, they will ask us. That was the mantra. Right? Someone's here, someone's trying, someone needs help. So we can adjust our docs, we can adjust our strategy, we can adjust everything according to that.
Because they're here, they're gonna tell us, or we will find out where they're faltering. Well, something's changed in the last year or so. We all know this. We've been talking about it all conference. LLMs and AI agents, or whatever you wanna call them, are here, and they're often the first ones to hit your docs.
They're often the first ones to hit your SDKs. So you're not getting a lot of visibility into it. Now, I wanna preface this by saying, I don't know a lot of things. I'm probably wrong about a lot of these things.
It's quite possible. I am often wrong about a lot of things. My wife reminds me of that pretty regularly. But before we get into what the new realm of this developer relations looks like, a little bit about me. I've been in developer relations for about a decade.
I've lived in New York for about fourteen years now. And of those fourteen years, the last year, we took a sabbatical and traveled around the world with my then two-year-old, who is three now, and my wife. And it was an amazing, amazing time. It was stressful, I'm not gonna lie. But it was amazing because, for the first time ever since I started my career, and definitely since I started developer relations, I was just a developer again.
I didn't care. I didn't have any affiliations with any organization. I was just playing around with stuff. I was just building random stuff. And it turned out that LLMs actually lined up with the sabbatical pretty well.
I felt invincible, and I don't mean that in an arrogant sort of way, but I felt like I was just unshackled. Things that were just beyond me, I could use those things. I could build with those things. I built some iOS apps. I built my first Mac app.
I was sitting around at cafes in Thailand, and I was like, I need to know what speed I'm getting at these cafes. So I built this Mac app that tells me what speed I'm getting. I built a service called How Big Is a Baby, which sends you these Monopoly-card-like images telling you how big the baby in the belly is, while my wife was pregnant. I built EventMate, which is the SMS service that you may have been using throughout this event. And these were all built during this one year.
And what I realized as I built some of these things is that I was building all sorts of things, things that were just beyond me. LLMs were allowing me to do that, which made me think that there are a lot of other people who are gonna get unshackled with things that were initially not available to them. And I was one of them when it came to Swift, for example. Even now, every time I do SwiftUI, I feel empathy for someone who may be starting off with web development.
Because these LLMs, vibe coding, as you may call it, can take you on this wild journey pretty quick. And I've seen that with Swift for me personally. So that was the first thing. The second was that I was visiting the docs less and less, and this is a year ago. Like, we've moved on a lot from then.
But I noticed that I was not using the docs, even for things that I had a pretty decent knowledge of but was getting stuck on, like Twilio or AssemblyAI. These APIs were just my go-tos, and I was not going to them, because LLMs were just giving me that information. So that was one. But on the other hand, LLMs were actually very helpful, until they were not.
They would hallucinate, and they would just fill in the gaps. Whenever they didn't find the answer, they were just seemingly filling the gap for me. And that was throwing me off the ledge pretty often. So my solution, very naive, was that, you know what? These are the code snippets I keep going back to: Twilio, AssemblyAI, OpenAI, Claude.
Why don't I just build my own collection of snippets? I'm gonna point my LLM, Cursor, to these snippets, and it'll always know what kind of code I normally write. So I did that. I built this thing called guides.curiousments.com, which is a personal domain where I write about technical stuff. And the idea was that I would have all of the most frequently visited APIs for me, and all it's gonna do is give me the code.
And the premise was: I wish all API docs actually did this. All I'm looking for is this first hello world that gets me to this specific experiment that I'm running. I don't need to dive deeper into this. And it worked. It worked really well, but I realized that developers are not gonna do what I just did.
Developers are not starting with the docs anymore. They're not googling. They're not looking at Stack Overflow. They're definitely not looking for a tutorial on your blog. That's not what they're searching for.
And there's a new generation of builders, of vibe coders, that's been unlocked. So are they starting here? Well, they're starting with one of these, the agent or tool of the week. This could be Claude Code. This could be Cursor, Windsurf, or any other tool that is getting released almost on a daily basis.
So this is a new reality. I ask, I get the answer, and I move on. And I never visit the docs. And honestly, I don't ever wanna visit the docs because the idea is not to read the docs. The idea is to get me from zero to one for the thing that I'm trying to do.
So we need to rally around how we will help developers now, now that they have this agent thing coming and getting them the information. There was a talk yesterday, I think it was Greg who said this, that you wanna save developers time. Time is the most precious commodity, and LLMs are actually doing that. So we need to help the LLMs get them that information even faster.
So we were here. Someone was trying. Someone was needing help. Now they don't. They just get an answer.
They move on. But there are some things that haven't changed. What's good for developers is good for your company. That was true. That is true.
That'll always be true. But the second point is somewhat new. What's good for LLMs is now good for developers too. Because, again, that's what developers are using. You wanna be where developers are.
And this is where I feel like we need to double down: the docs are still the source of truth. All these LLMs are getting trained on the docs, on the stuff that we are writing. So, yes, the developers are not coming to them directly, but they are consuming them indirectly. So they need to be accurate, they need to be up to date, and that responsibility falls on developer relations as a discipline. So as I was thinking about what I was doing that one year, I realized that there are three things that developers generally go through.
One is that I'm experimenting with something new. I don't know what I don't know. So I just went in and I searched like, oh, okay. So how do I go about deploying an app? And I don't even know what platforms are available.
So at that point, is your platform showing up? And then once I have that, then I'm tinkering. And that tinkering can be learning. It could be a bit of building. It could be going back and forth between the two.
But that's generally the journey, and that is still the same. It starts off with the LLM. It ends at the product. It was initially starting off probably at Google or Stack Overflow or wherever, but it was still ending at your product. So the discovery is: what tool should I use?
Learning is how do I use this thing? And then finally, okay, how do I wire this up into my code now? And there's a lot of overlap in learning and building. And this is way beyond my expertise. I'm by no means an SEO expert.
There are probably people in the audience who are bigger pros at SEO. But my sense is SEO as we knew it is fading away. LLMs give direct answers, and you may not even get a visit. So what does that mean in this new world? As I was researching, I found this AEO, GEO, I don't know, XEO.
I don't know what it is. But the premise is: how do you optimize for the LLMs so they always know the kind of answer the developer should be given. This could be answer engine optimization, generative engine optimization, but all of it basically falls into the product discovery part. And I have a link to this, but this is from a Reddit thread I found, which I thought was super helpful: there are four signals that you can track. This is probably true for SEO also, but it also applies to AEO, GEO, etcetera.
So let's look at those. The first one is brand mention frequency. Is the LLM even saying your name? So you search for something. Did your brand show up?
So I am searching here for how I can deploy a Flask app. I asked a very specific question: Flask, Python. It figured out that DigitalOcean has an app platform, and that it should probably recommend that. Now, I'll preface this with full disclosure.
I work for DigitalOcean. It probably knows more about my DigitalOcean connection, so it prompted me a little bit more. But it's a good example. Is your brand even showing up in these? The second one is the share of voice compared to your competitors.
So how often are you mentioned compared to the competitors? So here, it mentioned DigitalOcean, but it also mentioned a few other options where you can deploy those apps. And then the third one is: are your docs actually being linked? So I looked, and at the bottom, it actually mentions two resources from our blog.
And the first one is using OpenAI SDK with DigitalOcean Inference API, and the second one is Gradient AI. So this is a good example of that it actually gave the information, but it also linked back to you. So now you know that, okay, I'm at least getting mentioned. And then the last one is, well, did you get any clicks from there? So in this case, I obviously clicked on it.
And this is a blog post that we just wrote a few weeks ago about serverless inference and how you can use the OpenAI SDK with the new Gradient AI platform that DigitalOcean launched. So these are the four things that you can measure. And I'll mention these two things. The way to improve the LLM knowing about you across those four signals is, one, and this is a duh thing, but create docs that are actually worth citing, and we'll talk more about that. And the second one is publishing llms.txt and llms-full.txt.
These are text files. You can think of them like your sitemap.xml, except they're for LLMs. So when the LLMs come crawling, they read this and they have a lot more information about how your content is structured, and they can read it well. Now, this is very, very early days for all of this stuff. Use them as your baselines.
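For illustration, an llms.txt in the emerging convention is just a Markdown file served at your site root: an H1 title, a short blockquote summary, and H2 sections listing links with one-line descriptions. The product name, URLs, and descriptions below are made up for the sketch; llms-full.txt is the same idea but inlines the full page content instead of linking to it.

```markdown
# Example Cloud Docs

> Documentation for Example Cloud: deploying apps, managing databases,
> and calling the inference API.

## Getting started

- [Deploy your first app](https://docs.example.com/apps/quickstart): zero-to-one deployment guide
- [CLI reference](https://docs.example.com/cli): install and authenticate the CLI

## Optional

- [Changelog](https://docs.example.com/changelog): release notes
```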
I would not advise building strategies around the data that you find, because this is the absolute wild west of 2002-era SEO, where nobody knows what is gonna happen. So there's a lot of baseline stuff that you should be doing right now, so that as these tools get better, you have some baselines that you can actually compare to. There are a few: Profound (tryprofound.com) and evertune.ai are some of the ones I have heard of.
But then there's that Reddit thread that I found very helpful. If you just search for it, you should be able to find it. It lays out all of the big players. And by big players, I mean the ones who have been funded by VCs. So it's not perfect, but it's a start.
The numbers are very fuzzy. The sources are incomplete. It's a small sample size. There are a lot of assumptions and a lot of tweaking you may have to do. So that was discovery.
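As a rough sketch of what tracking those baselines could look like, here is a small script, with hypothetical sample answers, that computes the first three signals (mention frequency, share of voice, and docs-link rate) from a batch of LLM answers you collect yourself. The fourth signal, clicks, has to come from your web analytics instead.

```python
from collections import Counter

def aeo_signals(answers, brands, docs_domain):
    """Rough AEO/GEO baselines from a sample of LLM answers.

    answers: answer texts collected by asking an LLM your target prompts.
    brands: brand names to track (yours plus competitors).
    docs_domain: your docs domain, to count how often you get linked.
    """
    mentions = Counter()
    linked = 0
    for text in answers:
        lower = text.lower()
        for brand in brands:
            if brand.lower() in lower:
                mentions[brand] += 1  # signal 1: is your name even said?
        if docs_domain.lower() in lower:
            linked += 1  # signal 3: are your docs being linked?

    total_mentions = sum(mentions.values()) or 1
    return {
        "mention_frequency": {b: mentions[b] / len(answers) for b in brands},
        # signal 2: how often you appear relative to competitors
        "share_of_voice": {b: mentions[b] / total_mentions for b in brands},
        "docs_link_rate": linked / len(answers),
    }

# Hypothetical answers sampled for the prompt "how can I deploy a Flask app?"
answers = [
    "Deploy Flask on DigitalOcean App Platform or Heroku. "
    "See https://docs.digitalocean.com/products/app-platform/",
    "Heroku is a popular choice for small Flask apps.",
]
signals = aeo_signals(answers, ["DigitalOcean", "Heroku"], "docs.digitalocean.com")
```

Substring matching is crude (it misses misspellings and counts incidental mentions), but run the same prompts weekly and the trend line is still a usable baseline.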
And then let's talk about the learning and the building parts. Now, this is where the docs are still the source of truth. They're more important now, in fact, because the LLMs are relying on them. And this is something I've realized: humans have an instinct. When we see something that doesn't sound right, we ask. We question it.
Let me cross-check it. Let me actually go Google this. Let me actually go to Stack Overflow. Let me find a GitHub repo for that. LLMs don't do that.
They, in fact, just keep going. They'll fill in the blanks. Sometimes they'll be right, sometimes they'll be wrong, but 100% of the time, they will do this with complete confidence. So I'm sure people in this room have been thrown off by LLMs who would just be like, oh, yeah. Absolutely.
I'll give you an example. We were working with a community member who was helping us write an article on MCP. This was like four or five months ago. And they were obviously taking help from the AI to write some of this stuff or to research some of this stuff. And the article came out saying, so MCP is this, you can do this, and of course, you can deploy it on DigitalOcean.
This came to me. I was reading through it, and I was like, that's not true. We don't support MCP. Well, we do now. Full update, we do now.
You can deploy remote MCP servers on DigitalOcean, but this wasn't true five months ago. So this person was obviously just searching through and was misled by ChatGPT or Claude or whatever it was into believing that this thing was true, and it absolutely laid it out: go click here, click here, click here, push, deploy. Congratulations, you have an MCP server. So there's a lot of false information out there as well. So how can we improve this?
But before we do that, let's talk about how LLMs are actually using your docs. It helps to understand what is happening under the hood. This is basically, and I'm oversimplifying, three steps. The first one is ingestion.
This is where the LLMs take your documentation page and split it. So you have an h1 tag, you have h2s, you have bullet points, you have paragraphs, and all of that. And it's just splitting it. And the splitting can happen at any point; we need to help it chunk things together.
Because if it chunks at the wrong place, it's just lost the context of what came before. The second part is retrieval: finding the relevant chunks of information that have been taken in by the LLM. And then finally, generation: producing an answer using those chunks. So let me give you some examples. The first one says: App Platform deploys your app on every commit.
Imagine that's one paragraph. And the second paragraph is: be sure to use production-safe environment variables. Now imagine the chunking actually happens after the first paragraph. It's completely lost the fact that the second part is actually relevant to the first. So write each section from a point of view where it's completely self-contained.
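To make that concrete, here is a toy sketch of the ingestion step. Real pipelines split by tokens and add overlap, but the failure mode is the same: with a naive size-based splitter, the warning paragraph lands in a chunk that never mentions App Platform, so retrieval can't connect the two.

```python
def chunk_by_paragraph(doc: str, max_chars: int = 200):
    """Naive ingestion-style splitter: merge paragraphs into chunks
    until a size limit, as a RAG pipeline might before embedding."""
    chunks, current = [], ""
    for para in doc.split("\n\n"):
        # Start a new chunk when the next paragraph would overflow
        if current and len(current) + len(para) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

doc = (
    "App Platform deploys your app on every commit.\n\n"
    "Be sure to use production-safe environment variables."
)
# With a small limit, the second paragraph ends up in its own chunk,
# stripped of any mention of App Platform.
for chunk in chunk_by_paragraph(doc, max_chars=60):
    print("---\n" + chunk)
```

Rewriting the second paragraph as "Be sure your App Platform app uses production-safe environment variables" keeps the chunk meaningful no matter where the splitter cuts.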
And I'll talk about a few other examples here. But it's really important to think of chunking when you are writing documentation. We can't just Command-K or chatbot our way out of bad docs, if that makes sense. The second one is that there could be dependencies. LLMs don't remember what came before.
So instead of saying, turn it on in the settings, or, as said earlier, refer to that section, which LLMs don't get and which makes it very easy for them to get confused, include any dependencies you may have. Spell them out in that section. And by the way, if you've noticed, this is true even for developers.
A human developer reading this would 100% prefer the second one over the first one. I have no idea what it's talking about in the first one. Spell it out as much as you can. Don't be vague. Name the product.
Instead of saying, edit the value to avoid timeouts, mention App Platform, mention the exact feature that we are talking about. Spell it out. And as I'm reading this, I realize that we do this for human developers too. But again, human developers are a bit more forgiving. I can't believe I'm actually saying this. Developers are forgiving.
They are relatively forgiving; they will go and cross-check. But LLMs will not do that. So do not assume any knowledge about any of your features. And this is something I'm not sure how much of is actually true, but I think it makes logical sense: JavaScript animations, where you're clicking on something and it just pops out, are complex for LLMs to grasp.
Maybe avoid those. Again, I don't know, but it just makes logical sense that LLMs would struggle with some of that. And the one thing that stood out for me throughout this was that a lot of it is just accessibility principles. We've talked about this for twenty, thirty years now.
If you're including images, make sure you have alt text for them. LLMs can read alt text. If you have tables, don't make them too complex, because they're just hard to read from an accessibility point of view also. So a lot of this is just going back to accessibility. So I'm gonna leave you with a few things that I think are super helpful.
Like I said, write self-contained sections. Each section should make sense on its own. One good experiment for this might be: pick a section and just try to read it with no context around it. Is it linking to the proper places? Does it even make sense as that one paragraph or not?
So self-contained is super important. The next one is something that all of us can hopefully associate with. When we run into an error, we take that exact error and dump it into the ChatGPTs or the Cursors of the world. Make it easy for the LLMs to do that matching. A lot of this is just matching.
Like, oh, that makes sense, I have that chunk that matches exactly this, so it makes it a lot easier. Same thing with troubleshooting FAQs. Answer the questions people are actually asking. This could be taking some of the Stack Overflow questions that you have.
Maybe people have given you feedback. But create that FAQ section for almost every product, if not feature, and include it in your docs. And this is where my initial naive experiment germinated, because I didn't find small, self-contained code snippets that I could just pick up and be like, okay, this is exactly what I want for this hello-world sort of situation I have going, because I'm playing around with it for the first time. And the other thing to remember, again, is that there's a new generation of developers slash builders coming to your APIs. They will have no context.
So make it easier for them to grab just that snippet of code. Remember what Ricky showed this morning in his demo of the vibe coder. He helped a gentleman take one piece of code and dump it into a Cloudflare Worker, and that was an aha moment. So where is that code snippet coming from for your API, if LLMs don't discover it? Oh, and don't use PDFs.
It's 2025. Don't use them in your docs, but also don't use them to train your agents. They're gonna have a really hard time reading these complex PDFs. Hopefully, they will get better. But as of now, they don't do very well with PDFs.
So I'll leave you with two things that you can do today to improve the learning and building. The first one is adding an LLM-powered chatbot. Once you've done everything else and your docs are in good shape, you should absolutely consider adding an LLM-powered chatbot. And the second one is adding a copy-as-Markdown button. I'll show you examples of some of this.
So, DigitalOcean provides an agent platform under Gradient AI. And what you can do is go to DigitalOcean and create a knowledge base. You can upload your documentation or whatever resources you wanna upload. It creates a vector database. You don't need to do any of that yourself.
And then it pops out a chatbot JavaScript snippet that you can just copy-paste into your existing portal, and that'll make all of that information available on your website. So you can see there's that ask-docs thing that we have on our DigitalOcean product docs. And the second thing, like I mentioned, is adding a copy-as-Markdown button. Now, this is a fail-safe, and I think it's an interim thing that we should all do right now, till we all get to a point where we have improved our docs, LLMs are trusting our docs, and they've been completely ingested. But in the meantime, something I do a lot is go to a documentation page and try to copy this thing.
And I don't wanna copy the HTML because, one, it's crappy, and two, there are tokens. We are all spending tokens on this random HTML that I'm gonna send to the LLM. So OpenAI does this really well, I think. Throughout their documentation, they have copy-snippet buttons, but they also have a copy-as-Markdown button, or I think they call it copy page. Essentially, if you click on that button, it gives you the markdown that I can paste into my agent or LLM and be on my way.
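To show why that button is cheap to build, here is an illustrative, stdlib-only sketch of stripping a docs page down to Markdown (not how OpenAI actually implements theirs; most docs sites can simply serve the original Markdown source, which is even cheaper):

```python
from html.parser import HTMLParser

class DocsToMarkdown(HTMLParser):
    """Minimal HTML-to-Markdown converter for a 'copy page as Markdown'
    button: handles headings, paragraphs, and code blocks. A real
    implementation would also cover lists, links, and tables."""

    def __init__(self):
        super().__init__()
        self.out = []
        self.prefix = ""

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.prefix = "#" * int(tag[1]) + " "  # h2 -> "## "
        elif tag == "pre":
            self.out.append("```\n")

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3", "p"):
            self.out.append("\n\n")
            self.prefix = ""
        elif tag == "pre":
            self.out.append("\n```\n\n")

    def handle_data(self, data):
        if data.strip():
            self.out.append(self.prefix + data.strip())
            self.prefix = ""

def to_markdown(html: str) -> str:
    parser = DocsToMarkdown()
    parser.feed(html)
    return "".join(parser.out).strip()

page = "<h2>Deploying</h2><p>Push to main to deploy.</p>"
print(to_markdown(page))
```

The Markdown output carries the same information as the HTML in a fraction of the tokens, which is exactly what you want to hand to an agent.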
So where does that leave DevRel? I think it's still important. It's still very impactful. It's still building the bridges that we want. It's just doing it in a slightly different way.
And that's important because we still help developers succeed, and that, I don't think, is changing. And hopefully, when we meet again next year, that will still be true. Thank you. My name is Amit.