
OpenClaw, ClawdBot, MoltBot: Our Thoughts on AI Agents

January 31, 2026 By Mike Carlo

Armando and I just had one of those conversations where your brain explodes a little bit. We sat down to talk about this project that keeps changing names (ClawdBot, MoltBot, OpenClaw… it’s literally the same project, I’m so confused!) and ended up going deep on AI agents, the future of programming, and why I think we’ve hit the last programming language we’ll ever need.

Watch the full conversation here:

What Even Is OpenClaw?

So here’s the deal. Armando introduced me to this project that basically wraps your favorite LLM (in our case, Anthropic’s Opus 4.5) in a Telegram, WhatsApp, Slack, or Google Chat interface. You can talk to it from anywhere: your phone, your laptop, wherever you are.

But here’s where it gets insane: it’s not running the chat version you use in the browser. It’s running the terminal CLI version. That means it has access to the file system. It can create files, download apps, generate apps, test apps, run whole environments. And you’re just talking to it via Telegram!
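Just to make that architecture concrete, here’s a rough sketch of the idea, not OpenClaw’s actual code. It assumes python-telegram-bot (v20+) and a CLI agent that accepts a one-shot prompt. The `claude -p` command, the `TELEGRAM_BOT_TOKEN` variable, and the sandbox folder are all stand-ins you’d swap for your own setup.

```python
# Minimal sketch: forward Telegram messages to a CLI coding agent and reply
# with its output. Assumes python-telegram-bot v20+ and a CLI agent that
# takes a one-shot prompt (here `claude -p`, a stand-in for your own setup).
import asyncio
import os

from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

AGENT_CMD = ["claude", "-p"]                      # hypothetical one-shot agent call
WORKDIR = os.path.expanduser("~/agent-sandbox")   # keep it off your main files

async def handle(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    prompt = update.message.text
    # Run the CLI agent inside its own sandbox folder and capture its answer.
    proc = await asyncio.create_subprocess_exec(
        *AGENT_CMD, prompt,
        cwd=WORKDIR,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    out, _ = await proc.communicate()
    # Telegram messages cap out around 4096 characters, so truncate the reply.
    await update.message.reply_text(out.decode()[:4000] or "(no output)")

app = ApplicationBuilder().token(os.environ["TELEGRAM_BOT_TOKEN"]).build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle))
app.run_polling()
```

That’s the whole trick: the chat app is just the front door, and the agent on the other side has a real file system to work in.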

Treat Your AI Like an Employee

This was my big epiphany. When you give a large language model full control of a machine, you’re not just running an agent on your computer. You’re basically hiring a new employee.

Think about it. If I hired a new employee, I would not let them near my main machine. I’d only let them access it for very specific things, because I wouldn’t trust them with more than that. Same thing here! That’s why people are buying Mac Minis and old laptops specifically for their AI agents. They’re treating them like employees with their own identity, their own credentials, their own access.

And get this: OpenClaw has a file called soul.md that stores the agent’s personality. It literally has a soul file! The agent keeps its personality across sessions, but can also context switch between projects. It knows what Project A is about, what Project B is about. It even builds its own skills and stores them for later.
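I haven’t dug through OpenClaw’s internals, so take this as a sketch of the idea rather than its real format: the “memory” is basically markdown files on disk that get stitched back into the conversation each session. The file names and folder layout below are my assumptions, not the project’s actual structure.

```python
# Sketch of the "markdown as memory" idea: a persistent soul file plus
# per-project notes and self-written skills that get loaded into context.
# File names and layout are assumptions, not OpenClaw's actual format.
from pathlib import Path

AGENT_HOME = Path.home() / "agent-sandbox"

def load_context(project: str) -> str:
    soul = (AGENT_HOME / "soul.md").read_text()             # personality, always loaded
    notes = AGENT_HOME / "projects" / f"{project}.md"        # what Project A / B is about
    skills = sorted((AGENT_HOME / "skills").glob("*.md"))    # skills it wrote for itself
    parts = [soul]
    if notes.exists():
        parts.append(notes.read_text())
    parts += [s.read_text() for s in skills]
    return "\n\n---\n\n".join(parts)

def remember(project: str, note: str) -> None:
    # Append a new note to the project's memory file.
    path = AGENT_HOME / "projects" / f"{project}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:
        f.write(note + "\n")
```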

Skills: Programming Without Writing Code

Here’s where my mind really started blowing up. I was working on a Power BI semantic model using an MCP server. I needed to document 190 columns with descriptions. Instead of doing it all manually, I told the agent to:

  1. Read a web page with the schema documentation
  2. Match the tables up to my semantic model
  3. Summarize the columns and build descriptions
  4. Do it for one table first
Then I said: “Make a skill that does this.” The agent documented its own process, stored it away, and then I cleared the context and said “Use that skill for every table in the model.”

It ripped through 190 columns in minutes. I even told it “I don’t like your ID column descriptions, make them better and show what tables they link to.” And it just… did it.

I saved myself DAYS of work. In two hours, I had documented this model to a very high degree. All I did was program it by talking to the model. That’s it. That’s the new programming.
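If you want to picture what the agent was doing behind the scenes, here’s a rough sketch of the scraping-and-matching half of that workflow in Python. The docs URL, table names, and HTML layout are placeholders, and the part where descriptions get written back into the semantic model over the MCP server is only hinted at in a comment, since that depends entirely on your MCP setup.

```python
# Sketch: pull column descriptions from an HTML schema page and line them up
# with the tables/columns in your model. URL, table names, and page structure
# are placeholders; writing descriptions back happens over the MCP server.
import requests
from bs4 import BeautifulSoup

SCHEMA_URL = "https://example.com/schema-docs.html"            # hypothetical docs page
MODEL_TABLES = {"Sales": ["SalesID", "CustomerID", "Amount"]}  # from your semantic model

def scrape_schema(url: str) -> dict[str, dict[str, str]]:
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    docs: dict[str, dict[str, str]] = {}
    # Assumes each documented table is an HTML <table> whose caption is the
    # table name and whose rows are (column, description) pairs.
    for tbl in soup.find_all("table"):
        name = tbl.caption.get_text(strip=True) if tbl.caption else ""
        cols = {}
        for row in tbl.find_all("tr"):
            cells = [c.get_text(strip=True) for c in row.find_all(["td", "th"])]
            if len(cells) >= 2:
                cols[cells[0]] = cells[1]
        if name:
            docs[name] = cols
    return docs

def match_descriptions() -> dict[tuple[str, str], str]:
    docs = scrape_schema(SCHEMA_URL)
    matched = {}
    for table, columns in MODEL_TABLES.items():
        for col in columns:
            desc = docs.get(table, {}).get(col)
            if desc:
                matched[(table, col)] = desc  # hand this to the MCP server to apply
    return matched
```

The point isn’t this code, though. The point is that I never wrote anything like it. I described the process once, the agent turned it into a skill, and then it reran that skill across the whole model.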

Natural Language: The Last Programming Language

Here’s my prediction, and Armando and I have been making predictions that keep showing up in our newsfeeds within weeks:

We’ve hit the last programming language you’ll ever need. And it’s your native tongue.

Think about it. Every programming language since the beginning of computers was created to help us talk to machines. JavaScript, Python, C++, assembly… they’re all just abstraction layers. Ways to bridge the gap between what humans understand and what computers need.

But now? The LLM understands natural language AND code. You tell it what you want, clearly, and it converts that into very good code. Not messy code. Good code that follows best practices.

Here’s the wild part: I think we’re going to see programming languages that humans cannot read. Why? Because all those abstraction layers exist to help humans. If the LLM can talk directly to the lowest level, you could have kilobyte-level apps with zero abstraction. Super efficient. The compiler is now the large language model.

Time and Money Just Collapsed

Two arguments I’ve made my entire career: “if I had enough time” and “if I had enough money,” I could build a perfect product.

Agents have collapsed both of those at the same time.

What took days now takes minutes. And I’m spending what, $200 a month on a Claude Max subscription? A lawyer bills $450 to $1,500 per hour. My AI subscription pays for itself in the first hour, then I have it the rest of the month.

There isn’t a single programming language I know better than any large language model. They’re 100% better than I am at everything. So what do I have to do to stay competitive? Learn how to leverage the tools. Be a problem solver. Move away from doing the task and into orchestrating and building solutions.

Try It Yourself

Don’t go buy a Mac Mini. Seriously. If you want to experiment, grab that old laptop sitting there collecting dust. That 2012 MacBook Pro? Perfect. An old Razer? Great. Install OpenClaw there.

As long as you don’t give it access to anything important, there’s no real risk. Treat it like an employee. Give it its own credentials. Its own identity. Let it do its thing.

This is such an exciting future. We’re literally watching the way software gets built change in real time.

All Shorts from This Series

We cut up some key moments into shorts. Check them out:

  1. LLMs: The Future of App Building
  2. Is Natural Language the Last Programming Language?
  3. Agents Designing Languages: The Future of Programming
  4. Distribute New Skills Within Your Team!
  5. AI Agents Slash Project Costs
  6. AI Subscriptions: Save Money vs. Pro Fees
  7. Control Projects via Telegram Commands
  8. Unlock AI Agent Secrets: Discover Claude Skills
  9. AI Saves Time: Code Faster!
  10. OpenClaw’s Soul File: Restricted Access Employee
  11. LLMs Aim to Know Everything!
  12. Future of Programming: New Challenges Arise
  13. AI Evolves: Natural Language and Code Explained
  14. Clawdbot Evolves: Power BI Automation
  15. Abstraction Layers Removed?
  16. Adapt to AI by Problem-Solving

Full Transcript

Want the complete conversation? Here’s the full transcript with timestamps:

0:00 Welcome to our live stream. We’re going to try this out here. Just trying a new format. No intro video yet. We don’t have any kind of intro song. We’re trying a portrait mode, uh landscape here a little bit.

0:20 For those of you who are following me on PowerBI.tips, I want to introduce you to Armando. He and I are doing some side projects around data and moving data around. We are here to discuss a little about this ClawdBot, MoltBot, OpenClaw, like, what all the bots are. It’s literally the same project. I’m so confused.

0:44 Armando, I got to say publicly, thank you so much for introducing me to this whole environment. Maybe give a little bit of background. Why are we talking about this now? And what project did we start noodling on here? We just hit an unlock and we want to share it.

1:10 It’s the whole like LLMs come into the picture. Now we want to use them for coding. Now we want them to do a little bit more. And then it’s like but every session is its own thing and sometimes I want it to do more things and just keep talking to it and have it remember things.

1:35 So then comes along this project called ClawdBot. Then it changed its name to MoltBot, and now it’s called OpenClaw as of last night. They basically wrap your favorite flavor, in this case Anthropic’s Opus 4.5, in like a Telegram or WhatsApp or Slack or Google Chat, whatever interface.

2:00 What really makes it exciting is that it’s actually running not the chat version that you use in the app but the terminal CLI version. For those who are not very technical, that means like Claude Code or OpenAI’s Codex. It has access to the file system and it can create files, download apps, generate apps, test apps, run whole environments. And you are just talking to it via Telegram. This is insane.

2:45 I literally implemented last night. I had some work I had to get done. This is for the PowerBI people on the channel. This is a semantic model I was playing with. I’m using an MCP server for PowerBI desktop. I can MCP my way into the server and command it to do different things. Hey, make a measure, add some folders, any column that is a number, hide it and add a measure that does a sum on top of that column.

3:20 I’ve given a large language model full control of a machine. You’ll see a lot of people right now going out and buying a Mac Mini. The reason they’re doing this is because they don’t want it to be on their main machine and they want to treat it like a different employee.

3:45 I had an epiphany. This is more like an employee than it is like an agent on your computer. If I hired a new employee, I would not let them see my machine. I would not let them access it unless on very specific things. In the same way, I would set them up a user identity.

4:15 It’s making its own instructions. It’s saving its own passwords. It’s behaving just like an employee. And because the memory of the agent is limited, all it’s doing is writing a whole bunch of markdown files and referencing them in a folder structure.

4:45 When you’re using Claude Code, it can write into a claude.md file what that project is about, what that repository is about, and just keep track of what it’s doing so you can interact with it across sessions.

5:15 OpenClaw has its own separate files that it keeps sort of long term. It has like the soul file for personality. And that one stays. It’s called soul.md! And then the other files might be more project based.

5:45 It keeps tabs on all those skills. I know how to MCP into Supabase, I know how to MCP into N8N, I know how to talk to Fabric. All those don’t have to be in the full context of a huge conversation. It can actually clear context with the LLM itself but have its own context that it loads in each conversation.

6:30 I have documentation on a website HTML page and it describes column names. And then I have a PowerBI report that is the same tables, but there’s no documentation. So I use the PowerBI MCP, get the HTML page, suck down all the information. Read this code. Find me the table name that matches. Match up the tables, summarize the columns and column descriptions.

7:15 I told the agent: make a skill that does this. I saved the skill into my little folder. Clear the context window, start it over again from ground zero. Go use the skill that you just made for yourself and do this for every table in the semantic model.

7:45 It ripped through 190 columns making its own descriptions. I even said, I don’t really like the descriptions you made for the ID columns because it’s a key and surrogate key. Make a better description and actually if there’s any IDs, what tables is it used in? It just goes out and does it.

8:15 I just wrote programming code. I just wrote a function, just called a skill, but I didn’t do any of it. I just literally worked with it on the process, figured it out, told it to document its own process, store it away, and then just reuse its own process. I saved myself days of work. In two hours, I had documented this model to a very high degree.

9:00 Programming is just… JavaScript, C++, C, assembly, whatever programming language exists since the beginning of computers, those were created such that we could talk with the computer. Now we’re in a point where it understands us, natural language, and it understands code.

9:30 If you tell it clearly what you want it to do, it will just convert that into very very good code. It doesn’t make messy code. It makes very good code. It just will create whatever code it understood you to want.

10:00 I found a whole Claude skills repo of here’s every single different language, JavaScript, React, all the different languages. Here’s the skills an agent would need to know to use all best practices. This is exactly what architects should be doing.

10:30 You can give it the instructions and it understands how to read the code at volume. Within minutes you’re getting a code review as opposed to hours of someone else actually physically looking at it.

11:00 How do I take that knowledge and better distribute that to the rest of my team? That’s going to be super impactful. I’m able to work with an agent, build patterns, document them. The trick is how do I distribute the skills we’ve developed amongst the team.

11:30 Here’s my next prediction. Someone is going to invent or we’re going to build languages that humans cannot read in programming.

12:00 I don’t get a JSON file and tell the computer put it on this registry on my disc. That’s already been abstracted so far away. I can’t write machine level code. The only reason frameworks exist is to help humans understand loops and structures and objects.

12:30 Let’s just say hey agent, you need to build your own programming language. We don’t care to read what it is. I’m going to tell you to build things, websites, whatever. You design what you feel like you need, and I’m just going to tell you what I want, and you’re going to figure it out.

13:00 We’ve hit the last programming language you’ll ever need. We’re done. And the programming language is your native tongue, or English, or Portuguese, whatever language you’re speaking. That is the end game.

13:30 We’ve landed at the final programming language that we need. We’re going to remove all abstracted layers of code and programming. It’s going to be gone in like a year, two years, five years. We’re not far away from this.

14:00 Why would two agents think that human languages are the best way to communicate amongst themselves? They’ll just communicate in whatever they need and it’ll be a lot less to transfer over the wire which will make it a lot faster.

14:30 Languages started like assembly, then you get to C, then you have operating systems, then virtual machines like Java, then TypeScript running in the browser. We went so high up to make it easy to program. Now it’s going to start going down again.

15:00 The LLM will start to talk in the lowest possible abstraction layer. The apps will be much more efficient, the memory use will be much more efficient. You remove the gigs of trash underneath the app. You’re going to build kilobyte level apps because it’s all written in the lowest language. Zero abstraction anywhere.

15:30 There was this recording studio that had this weird software built in assembly. This whole DAW was built in assembly. It could run on a very low power computer. You could turn on plugins like crazy. The whole DAW was like 2 megs. Because it was written in assembly, but nobody takes the time to do it.

16:00 We just talk to LLMs just like you and I. They build the stuff, but they build it much better. Data centers, much less usage. We’ve hit a massive unlock on hyper efficiency on app building. The compiler now is large language models.

16:30 In the next couple of months someone is going to produce another earthshaking programming language that has nothing to do with people. Everyone is going to focus on building these abstracted layers down to machine code.

17:00 I don’t know how you’ll test it. I don’t know how you’ll know if it’s right. I don’t know if you’ll know if it’s like mining Bitcoin at the same time. There’s a whole ethical problem that we’re going to eventually run into here.

17:30 We’re not so far from quantum computing at which point all goes out the window. It’s just crazy stuff you don’t know what’s happening. As long as it works, that’s as far as you go.

18:00 We may have already hit the moment in time where you’re no longer going to need to write BIOS for quantum computing. You’re just going to tell the parameters to the agent and the agents or team of agents will figure it out.

18:30 Two main arguments I had almost my entire career: if I had enough time and if I had enough money I could build a perfect product. Agents have collapsed both of those at the same time. What took days now takes minutes.

19:00 As soon as you reduce time, you immediately reduce spend. I’m spending $200 a month on a Claude Max subscription. A lawyer bills $450 to $1,500 per hour. One Claude subscription pays for like 15 minutes of their time and you can use it the entire month.

19:30 The agents and large language models are trying to encompass all of human knowledge into a computer. There isn’t a single programming language that I know better than any large language model. They’re 100% better than I am at everything.

20:00 I can’t bring anything to the table. So what I have to do is learn how to leverage the tools. I’ve got to be a problem solver. I’ve got to move away from doing the task and move into orchestrating and building the solutions.

20:30 Very exciting future. There’s a lot of things coming. We hope you like this format. Please let us know in the comments.

21:00 If anyone’s trying to decipher whether they should use something like this, just try it. Don’t go buy a Mac Mini. Grab that 2012 MacBook Pro sitting there collecting dust. That’s where I installed it.

21:30 As long as you don’t give it access to any important stuff, there’s no real risk. Treat it like an employee. Give it its own credentials. Its own identity, just like everything else.

22:00 There’s a lot of people already starting to talk about the security risks of all this. Treat it like an employee. Just give it its own credentials when you make things.

22:30 Thank you so much, Armando. We’ll see you next time. Thank you for this very short episode. Cheers!
