Recorded March 27, 2025
John Eischeid: Let’s just start off and just give me a brief introduction about yourself. You started your company in 1999. Just tell me about yourself, inLinks, and what it does, briefly.
Dixon Jones: So, I’m DJ. I have been doing this SEO stuff since 1999. A couple of years earlier, I set up a business doing murder mystery evenings, and I kind of learned my SEO with the murder mystery website I was running. And I ran an agency for about a decade, and then was Marketing Director of Majestic.com, which is an internet backlink search engine, which some of you may know. And now I am CEO of a company called inLinks, and we’ve just launched a new product as well called Waikay, which I’ll talk about if we get a chance. inLinks is software-as-a-service that really thinks about the world in terms of well-defined entities – topics that are defined. It’ll break up any content into specific concepts. That works very well in modern search, because it gets you to create content that is based around concepts, not just keywords. So you’re no longer just trying to rank for “want to buy a house” without thinking about whether the person searching is a couple, or a first-time buyer, or a buy-to-let person. Thinking about your audience – thinking about the concepts – is better than thinking about just the keywords, really.
JE: Okay. So historically, why has this been important for businesses? And touch on how that might be changing. We’ll go into a little bit more depth in a bit.
DJ: So when Google started out, it used the PageRank algorithm and it used a concept called n-grams. N-grams really went through the whole of the internet and saw, let’s say, “Tower Bridge” – let’s say “Golden Gate Bridge.” Everybody on your side of the pond is going to know about the Golden Gate Bridge. The concept of a “bridge” on its own is one thing. The idea of a “gate bridge” is a bit weird. “Golden Gate Bridge” – that’s very well known. So Google went through and counted the number of times that two, three, four, five words were all used in the same order, and it recorded those in a database – and this is genuinely how Google started out – so it could see that “Golden Gate Bridge” was a very common set of words that went together, and so it understood it as a concept. And so if you wanted to optimize for “Golden Gate Bridge,” then you used the phrase Golden Gate Bridge. The whole of the maths of Google in those days was based around these kinds of n-grams, combined with the backlink algorithm of PageRank. But in about 2010, they bought a company called Metaweb, which had a thing called Freebase. And Freebase was a database of concepts – a bit like Wikipedia is a database of concepts. But Freebase was a little bit more structured, in that it wasn’t so much about describing things in lots of words. It was more: “Eiffel Tower” is this ID number, and it’s connected to these ideas. It was connecting the world’s information not by words but by concrete ideas – and that’s what inLinks has done. inLinks basically built a knowledge graph of all the concepts in the world, and it’s conceptually based around Wikipedia. Think about a Wikipedia article as an entity and you’re not too far from the tree. So we’ve got this sort of picture of all the concepts in the world.
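The n-gram counting DJ describes can be sketched in a few lines of Python – a toy illustration with an invented three-document corpus rather than the whole web; the threshold for calling something a "concept" is not shown:

```python
from collections import Counter

def ngram_counts(docs, n):
    """Count every run of n consecutive words across a corpus."""
    counts = Counter()
    for doc in docs:
        words = doc.lower().split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts

docs = [
    "the golden gate bridge spans the bay",
    "visit the golden gate bridge at sunset",
    "a gate bridge is not a common phrase on its own",
]
trigrams = ngram_counts(docs, 3)
# "golden gate bridge" recurs across documents, so it surfaces
# as a candidate concept; most other trigrams appear only once.
print(trigrams["golden gate bridge"])  # → 2
```

At web scale, frequently co-occurring word runs like this are what let the early engine treat “Golden Gate Bridge” as one unit rather than three unrelated words.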
So, when we read a piece of content that you have written for your customers, or that your competitors have written, what we’re doing is breaking it down into those individual concepts. So if you’re writing about three books on a bookshelf, you’ve got the concept of the number three, the concept of a book, and the concept of a bookshelf – three concepts all together. And then we have a natural language processing algorithm that can look through the 3,000 words on the page, or the 30 words, or whatever, and pick out the concepts and the relationships between them. What that does is reduce the problem of “How do I write about bookcases for books?” into concepts instead of just words. So if we wanted to write about bookcases for books, it’ll go and have a look at the best pages on the web for the phrase “bookcases for books,” read those, understand all the underlying ideas and concepts those pages are talking about, and use that as a blueprint to come up with a plan for new content. It’ll also use the same concept to interlink all of your ideas on your website. So let’s say you’re a furniture company and you’ve got a page on bookcases. You say, “This is the page about bookcases,” and it’ll go through and it might see “bookcases,” but it’ll also see “bookshelves,” or other synonyms of that same idea, and say, “Right, you’ve talked about these bookshelves over here. We want to link that through to the bookcases page.” So it’ll go through and read your existing content, and then you – as a human in the loop – are saying, “This is the page about this concept,” and it’ll try and build those links, and you can check them and make sure they’re okay. You can edit them if you want to. But that’s the idea.
So it’s doing a whole load of things based around treating the world as a collection of objects instead of treating the world as just a bunch of words.
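The synonym-aware interlinking DJ describes might be sketched like this; the synonym table, the Wikipedia-style entity ID, and the page URLs below are all invented for illustration, not inLinks’ actual data model:

```python
# Toy entity-based interlinking: surface forms collapse onto one
# concept ID, and any page mentioning the concept gets a suggested
# link to the page designated as "the page about" that concept.
ENTITIES = {
    "bookcase": "Q171043",    # hypothetical entity ID for the concept
    "bookcases": "Q171043",
    "bookshelf": "Q171043",
    "bookshelves": "Q171043",
}
TARGET_PAGES = {"Q171043": "/furniture/bookcases"}

def suggest_links(page_url, text):
    """Suggest (word, target_url) internal links for a page's text."""
    suggestions = []
    for word in text.lower().replace(",", " ").split():
        entity = ENTITIES.get(word)
        # Don't suggest a page linking to itself.
        if entity and TARGET_PAGES.get(entity) not in (None, page_url):
            suggestions.append((word, TARGET_PAGES[entity]))
    return suggestions

print(suggest_links("/blog/reading-nooks",
                    "Add floating bookshelves above the desk"))
# → [('bookshelves', '/furniture/bookcases')]
```

The point of the sketch is the lookup through the entity ID: “bookshelves” and “bookcases” are different keywords but the same object, so both route to the one target page, with a human reviewing each suggestion.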
JE: Interesting.
DJ: Is that deep enough?
JE: Yes, very much so. At this point, artificial intelligence models are starting to mine the web for content. But there are two ways. I know there’s the tool in your site where you use SEO and AI to generate new content, but there’s also the reverse, which I also want to explore. I mean, how does SEO affect what an AI retrieves when you’re using a deep research model or something like that? So, it’s a two-way street. And I’m just kind of curious as to how they work together – or not.
DJ: So, the first one, you’re right. Yeah, we’ve got tools to help you create content. We say these are the entities you want to talk about, in this order, and then – it looks painless at the user’s end – we can use those to prompt and generate paragraphs that are very refined. That works out better than just asking ChatGPT to write a thousand words on bookcases in America or whatever, because then it will stray a long way from the SEO tree. But if we keep all the prompts down at the paragraph level – “These are the things we want to talk about in this paragraph” – you get a much better SEO-optimized piece of content. But you’re right, the other way round is very interesting, and it’s going to spawn a whole new industry, and I’m really interested in it and excited. And that’s why we’ve come out with this other product called Waikay, which stands for “What AI Knows About You.” And we kind of made it hard to pronounce – hard to spell, everything – on purpose, because the very next thing you’ve got to say is, “What? Okay. What’s that?” And then you say it’s “What AI Knows About You,” at which point we’ve also sold the product. So what it is – it’s the first step in what we’re trying to do. The holy grail, probably, is: “I want to buy a bookcase, ChatGPT. Where do I go?” “Let’s go to John’s site at mymarketingpro.com.” [Please note: It’s not my own site, and it doesn’t sell bookcases.] That’s where you want to go. That’s where we’re going to want to come out in the end. We’re a long way from that. But what we need to do first is build technologies that will allow us to see what the LLMs are doing. And so What AI Knows About You is a tool where you can put in your website and it will go and ask the main LLMs. We’re working with Perplexity, Claude, ChatGPT, and two versions of Gemini – the training data and the grounded data.
So the grounded data is the one that looks up a search engine when it comes back with the results, and the training data does no search lookup – and I think Sonar is also grounded as well. So right now we’ve got some with training data and some with search results and augmented data. And so it’s building up a picture of what AI knows about you, and then it allows you to fact-check what’s happening right now. It’ll come back and give you a score – and we’ve actually put in a patent; just literally before I got on the phone, I got the patent application number back from the lawyers. So, obviously, we had already put it in before we launched it. But yeah, we’ve got this score now saying, right, Claude, we’re giving you a score of 76 for what Claude knows about MyMarketingPro or inLinks or whatever the site is. But then you can also do it by topic as well. So inLinks is all about natural language processing. What does ChatGPT know about inLinks in the context of natural language processing? Now we have a report that says, “Right, in that context, these are the facts that are coming out about inLinks in the context of natural language processing. Here are all the facts. We’ve got the raw results if you need to check those as well, but here are the facts.” And the user can say, “Should we flag this? Do I need to go and tell my client that there’s something terrible going on? The LLM’s got it all wrong.” We are finding that LLMs are getting a lot of things wrong at the moment. Just flagging it is what we’ve got right now. But what we’re also doing is checking all the citations, and we’re finding that a lot of the links the LLMs put out are dead. Saw one to ikea.com/chairs the other day, and they don’t have a URL at /chairs. The LLMs appear to have seen a pattern and then just decided to say /chairs.
So now the LLM has come out with, “Oh yeah, ikea.com is doing this,” said all these things, and then cited a URL that doesn’t exist. Those are easy things to fix: if you’ve got an LLM citing URLs on your website that don’t exist, you can just 301 that URL to a page the human can actually see, and that’s going to be great. You can also see your competitors as well. So what we’re doing is looking at your site always in connection with a couple of competitors, and then we’re using that to work out topics that your competitors are known for in a context. So, taking inLinks in the context of natural language processing, it’ll look at – I don’t know what we’re using as the two competitors – WordLift, and who’s another NLP company? MarketMuse is another one. So we can look at that and say MarketMuse is talking about, I don’t know, articles or schema and things like that, and you’re not. So it can start to point out things that they are getting reported for in LLMs that you’re not.
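The dead-citation check DJ describes can be sketched as comparing the paths of LLM-cited URLs against a site’s known pages; the paths here are invented, and a production version would presumably request each URL (or read the sitemap) rather than rely on a hand-written list:

```python
# Sketch: flag LLM-cited URLs whose path doesn't exist on the site.
from urllib.parse import urlparse

# Hypothetical set of paths that really exist on the cited site.
KNOWN_PATHS = {"/", "/p/office-chairs", "/customer-service"}

def flag_dead_citations(cited_urls, known_paths=KNOWN_PATHS):
    """Return the cited URLs whose path isn't a known page."""
    dead = []
    for url in cited_urls:
        if urlparse(url).path not in known_paths:
            dead.append(url)
    return dead

citations = [
    "https://www.ikea.com/chairs",            # pattern-guessed by the LLM
    "https://www.ikea.com/customer-service",  # real path in our toy list
]
print(flag_dead_citations(citations))
# → ['https://www.ikea.com/chairs']
```

A flagged URL like `/chairs` is exactly the case where a 301 redirect to the nearest real page turns a hallucinated citation into a working one.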
JE: Yes. So if you’re a company and you put your name in there and say “within the context of this,” this tool is a way to check whether LLMs are hallucinating about you regarding that topic.
DJ: Yes, and that’s what the fact-checking is for. So literally it’s a case of: we’ll break all these answers down – what facts are they actually stating? Let’s put those all in lists. Then you as a human can go down and say, “Yeah, that’s fine. That’s fine. That’s wrong. Let’s flag that.” We can decide why it’s wrong later, but we just need to flag it so it’s there – because the why, we haven’t quite got there yet. This is where I think there’s going to be a rich future for search engine optimizers, really, because knowing that an LLM is getting something wrong is one thing; drilling down and finding out why is another. If it’s something you’ve put on your own website, that’s an easy fix. It’s much more likely that it’s something on a third-party website that said something about you that you need to track down. The LLM may not have said it in the same way, so you can’t necessarily just type the fact into a search engine and find it. You may have to investigate a little bit more. Fortunately, we are citing all the pages that the LLMs are reporting when they give their answers, so you can go and have a look at those and maybe find the answers there. So there are a bunch of reasons why it might be doing it, but we have certainly had success. For inLinks, for example, when we were testing the tool, it came back and said – I don’t know, ChatGPT or one of them – said inLinks is a rank-checking service. We’re not a rank-checking service. We do a little bit of Google lookups and things using the API, but we’re not rank-checking. We’re not like Semrush or Ahrefs that check these things at scale. Eventually, we found a review on a site that we hadn’t even really noticed before that had said this when reviewing our product. Maybe they’re an affiliate, I don’t know. We reached out to them – and it’s a lot better than asking for a link. We just said, “You’ve done this. There’s a false fact there.” They came back: “That’s fine. We fixed that.” Fantastic. That was great to see. So we fixed a problem for users, and hopefully we’ve got rid of a hallucination for an LLM.
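The flag-first, explain-later review loop DJ describes might look something like this in outline; the fact strings, model names, and verdicts below are invented examples, not Waikay’s actual data:

```python
# Minimal human-in-the-loop fact review: extracted facts get a human
# verdict, and the wrong ones are collected for later investigation.
facts = [
    {"model": "Claude", "fact": "inLinks is an NLP-driven SEO tool"},
    {"model": "ChatGPT", "fact": "inLinks is a rank-checking service"},
]

def review(facts, verdicts):
    """Attach a True/False human verdict to each fact; return the flagged ones."""
    flagged = []
    for fact, ok in zip(facts, verdicts):
        fact["verified"] = ok
        if not ok:
            flagged.append(fact)
    return flagged

# The human marks the first fact correct and flags the second.
to_investigate = review(facts, [True, False])
print(to_investigate)
```

The flagged list is the starting point for the detective work: tracing each wrong fact back through the cited source pages to find where the model picked it up.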
JE: So, it’s a very good way to find errors in other copy that people have generated about you and have that fixed. I’m thinking that another avenue in which this information could be used is that in terms of reputation management, simply putting out something akin to a press release along the lines of “This model is saying this about us and it’s simply not true.”
DJ: We think that’s a really important potential market for us. I mean, I suppose the halo of my personal audience is in the SEO world, but I absolutely agree with you – that’s a really useful angle. I think the legal profession is also interesting, because there will be people that are defaming you, saying something that is potentially very wrong, and certainly in this country you can sue somebody for defamation at that point – or at least you have the right to. You can email them and say cease and desist. In other words, “You’re saying something wrong about me. It’s being interpreted this way. Cease and desist.” And they’ve got to take it down.
JE: Right. And crucially, at that point, you have to have the links to the sources so you know who to contact.
DJ: That – and we haven’t got all the answers yet. But what we’ve done with Waikay – and it’s developing quickly now – is create the tools to see into the LLMs’ world. And those tools really weren’t available a few months ago. Of course, before ChatGPT came along, there was nothing to build tools on, and that’s only changed in the last two years or so.
JE: Before that, nobody needed it.
DJ: So, it’s happening very, very fast, and there’s a parallel debate happening too: are the LLMs allowed to take all this data in the first place and make a mess of it? That’s another side of things. But for me – as a tool, as technology that’s trying to help brands in a marketing environment – we need to give them the tools to be able to make decisions and do actionable things, and so you can’t wait for the courts to decide whether ChatGPT or Gemini is good or bad. There are already millions and millions of people using them, and so at the very least you want those LLMs to be accurate in what they say and portray, just like you want human beings to be accurate. If you’re an airline that is continually late, then you don’t really want everyone saying that. But if you’re an airline that’s always punctual and there’s a bunch of people saying it’s continually late, that’s much worse. If humans continually repeat something that’s a lie, sooner or later a group of people will believe it. And if LLMs continually churn out something that’s incorrect, sooner or later someone’s going to believe it. So you want to manage that in exactly the same way that you want to manage public perception of a product.
JE: And I think from a legal standpoint – there’s been a lot of discussion over who’s responsible when an LLM or an AI hallucinates. And in addition to some of the deeper research models that provide links, this is another tool that people could use to track down where those hallucinations come from.
DJ: Yes.
JE: Or even if it’s just not a hallucination, it’s just incorrect information or potentially slanderous information, this will help people track that down. So, it’s additionally a research tool. Right?
DJ: Yes. So that’s one use. The fact-checking use is getting it right, and then the next use is augmenting it. So, “What are our top competitors talking about in the context of this product? What does the LLM think about them in the context of this product, compared to our brand in the context of this product?” Seeing that difference starts to give the marketer ideas: “Well, in order to strengthen the LLM’s perception of this product in this context – which transfers over to the public’s, the customer’s, perception – I probably need to talk about these ideas as well, because my competitors are known for them and I’ve not really talked about them. These are the gaps that are going to help the user eventually get to my product, whichever channel they go through.”
JE: And speaking of competitors, when it comes to your Waikay product, are there any others out there?
DJ: So, we’ve got a few. In fact, Lawrence O’Toole from Authoritas put up a blog post listing about 12 different tools out there that are trying to crack this problem in different ways. Because it’s a kind of rank checking for LLMs – that’s one way you might think about it – except you can’t rank-check LLMs, because an LLM will churn out a different result every single time. So you’ve got to think about things differently. But among the ones that we’ve come across, we think we’re doing okay against all the ones we’ve seen. Ziptie.dev is out there. And we’ve got Profound, which is the other extreme – thousands, tens of thousands. It’s one of those ones where it’s so expensive it doesn’t have the price on the front of the website: if you need to ask how much it is, you can’t afford the thing. So those were the ones we were sitting there looking at and thinking, “Right, we’ve got to make sure that we’re better than the free one, and we’ve got to be cheaper than the expensive one.” So we’ve got those ones out there. There are a bunch of other ones, but what most of them are doing – so Semrush has one product, which is $99 a month – is really focusing on AI Overviews in Google search results, and quite a few of the tools are just trying to concentrate on AI Overviews. I think that’s a bit of a mistake. I don’t think that’s getting to the heart of the problem. So we’re trying to do something different: by helping you understand what AI knows about you, rather than how you’re perceived in AI Overviews, we’re coming at the problem from more of a foundational kind of approach – or we hope we are. What the AI knows about you in the context of “Choose Product Here” is different from “Do I rank in AI Overviews when somebody types in keyword phrase here?”
JE: So, that’s what sets your service apart from the others.
DJ: That and the fact-checking. I don’t think anyone else has got a fact-checking service yet.
JE: Okay, and when we’re talking about AI, that’s obviously a very general term. You’re running it through quite a few models. So it’s basically one query. It sounds like a meta- or multi-model approach.
DJ: So you get a brand report for free at the moment. You can sign up for a free account on waikay.io/free, and that’ll give you a brand report. But the really interesting stuff comes in when you start doing topic reports – talking about your products and things. Every single report has five API lookups for yourself and five for each of your competitors. So that’s 15 lookups for every report that we write, which is better than the 6,000 lookups that Profound are trying to do – I don’t think if you did 60,000 you’d really get there. You’ve got to be laser-focused. Our approach is a more effective approach, I believe – I hope so, anyway. So yes, that does cost. Essentially we’ve got a bunch of packages. You can start at $20 a month, but ultimately it’s going to cost about $2 to run every report – just the basics of those reports – because of the API lookups and stuff. You’re leveraging all of those tools, but you get a lot of insight from that, because we can then leverage all the stuff we learned when we built inLinks to come out with some really good recommendations and ideas. And in the next week or two, we’ll hopefully be going from just “Here’s what we know” to “This is what you’re going to do about it.” But we didn’t want to come out with that on launch, because we needed to make sure we were going to come out with recommendations that were valid. I’ve seen the draft reports we’re coming out with today, and I think, “Wow, that’s great!” We’ve also learned a little bit about UX since we built inLinks, because I think the inLinks UX is a little hard to get through. Not dramatically – it’s not terrible – but we’ve got better. And also, by the way, we’ve got a friend called Claude who’s helped us with that.
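The per-report lookup arithmetic DJ quotes works out as follows (using the numbers stated in the interview: five queries each for your own brand and for two tracked competitors):

```python
# Lookups per Waikay report, as described in the interview.
queries_per_site = 5
sites = 1 + 2  # your brand plus two competitors
lookups_per_report = queries_per_site * sites
print(lookups_per_report)  # → 15
```

Fifteen tightly targeted lookups per report is the design choice being contrasted with running thousands of broad queries.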
JE: Claude helps with a lot of things.
DJ: We wouldn’t trust Claude to write the program in the first place, but we’re quite happy for him to make it prettier.
JE: Yes. Or you can use one model to debug the code from another. I’ve done that.
DJ: Yeah. I’m sure we’re playing with those things. Debugging is a useful thing. Trying to come up with the original ideas – that’s all our own.
JE: Would you say that AI is poised to become the next primary method of search and reputation management?
DJ: I do. Yeah, personally I do. I think it’s only going one way, and anybody that’s hanging on to the notion that people are going to carry on using traditional search in the same numbers as before is just not seeing it. These ChatGPTs and Perplexities and Claudes and things – they can feed much better into apps and third-party tools. They can also feed into your own website. You can easily have a large language model that is just trained on your own website’s data, and you can start asking questions on your own website. These sorts of things are going to make people jump straight to the answer in an LLM. Yes, it might skip your website, but the more important thing is that you have to be the product that people want to buy at the end of their journey.
JE: If the AI says something bad about it, people aren’t going to want to buy it. So, this is something people are going to want to monitor.
DJ: But what I think is interesting is that people are thinking, “That means I don’t need to do content anymore. What’s the point of even having a website anymore?” But the LLMs are voracious readers. They just want to read stuff. They want to read, read, read, read, read. And the more they read about your product in this context, your brand in the context of this product, around the web – not just on your own site – the better. A lot of that traditional SEO stuff still applies. It’s just eaten up by the LLMs in different ways and comes out in different ways.
DJ: So, ultimately, if you want to fly from London to New York, you’re still going to have to fly on that plane, and the way the LLM comes out may influence which flight you go on, because it will give you a really quick breakdown of British Airways versus American versus Delta, and you can start to see the pros and cons. And if you’ve got long legs or you’re a large person, you can make those associations and work out the width of the seats, or whatever it may be, very quickly. So you may come out with different interpretations or different decisions as a result of the LLM, but you’re still going to fly across the Atlantic, so the content is still important. It’s just that you may not see why it’s important until it’s too late.