Introducing: LM Data Tools

LM Data Tools is a suite of tools for synthetic data generation that I’ve been developing since I started working with fine-tuning language models back in 2023. What started as a handful of independent Python scripts has grown into a set that shares a web interface (and API), making the tools easy to work with. In this post, I will go over what the different tools are and give an example of how to use one of them.

Fine-tuning language models is most definitely a rabbit hole, but if you’re brand new to the concept, we can summarize it as a way to specialize a pre-trained AI model in some way. There are different approaches and techniques, and twice as many opinions about the best ones to use depending on the desired outcome.

If you’re not already a data scientist or engineer, it can seem overwhelming before you even begin. On that note, a huge shout-out to the Unsloth crew, whose colab notebooks were my introduction to getting hands-on with fine-tuning. Without them, learning this stuff would have taken much longer. The notebooks gave me a great starting point, and as I learned more about each aspect and parameter involved in the process, I could modify and tweak the notebooks to match.

Before I get too deep into the details: the reason for building a tool set like this in the first place was that I needed a straightforward way to get topic-specific data without wasting a bunch of time looking through public data sets, hoping to find exactly what I needed. I was learning about fine-tuning LLMs, and data synthesis as part of that, and for me, the best way to learn something is to build it. In this case, “it” was data.

Early Version

Despite this being a collection of tools, some of which have been around for a long time (by AI standards, at least), bringing them together was not as easy as it may sound, and there are definitely still bugs to iron out. Even so, I am very happy with it and use it enough that I thought it was time to share LM Data Tools with you.

Tools in the Box

LM Data Tools consists of six core tools and two utilities. The core tools all generate new data, whereas the utilities only modify existing data sets. All the features are also exposed via FastAPI, in case you want to incorporate any of them into other workflows. Without further ado, let me go over the tools included.

  • DataBird — Give it a list of topics and optionally a list of user perspectives, and this tool will generate a number of questions and answers on those topics, from those perspectives. It can generate the perspectives as well.
  • DataPersona — Takes a list of prompts (existing data) and applies a persona to the responses. It can write one or two replies per query, and you can keep both or let the built-in evaluation feature pick the best one. Personas from this tool can be imbued into responses in other tools as well, and you can add and edit personas directly via the web interface.
  • DataQA — A RAG tool. You feed it a list of URLs, which are then scraped, and as with DataBird, you can provide specific user perspectives. A number of question-and-answer pairs are then generated from those perspectives, based on the sources provided.
  • DataWriter — Made specifically for pre-training purposes; it will generate any number of made-up text documents, from blog posts to meeting summaries. The document mix is based on a weighted list of topics.
  • DataConvo — Sometimes you want multi-round conversations to train on. This tool can take any single-round conversation data set and expand it into longer entries.
  • DataThink — Adds <think></think> blocks to existing data, or generates data that already includes reasoning blocks. If a persona is chosen, it is only applied to the response, not the reasoning.
  • DataMix — A utility that lets you mix and match data sets from Huggingface into a new, custom data mix.
  • Reformat — Need to convert from alpaca format to ShareGPT, or the other way around? This utility reformats existing data sets without changing any of the prompts or responses.
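As a concrete illustration of what a conversion like Reformat's involves, here is a minimal sketch of one direction, alpaca to ShareGPT, using the field names conventional to the two formats. This is my own sketch, not code from the tool itself:

```python
def alpaca_to_sharegpt(row):
    """Convert one alpaca-format row into ShareGPT's conversations layout.

    Alpaca rows have 'instruction', optional 'input', and 'output';
    ShareGPT wraps turns in a 'conversations' list with 'from'/'value' keys.
    """
    prompt = row["instruction"]
    if row.get("input"):
        # The common convention: append the input context to the instruction.
        prompt += "\n\n" + row["input"]
    return {
        "conversations": [
            {"from": "human", "value": prompt},
            {"from": "gpt", "value": row["output"]},
        ]
    }
```

Going the other way is the same mapping in reverse, which is why the utility can round-trip without touching the prompts or responses themselves.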

Any Provider via OpenAI API

The tool set uses the OpenAI API to communicate with LLM servers, and you can connect to any provider that supports this API. If you’re running a local AI server, you can simply type in the server IP and use that. Your experience and the quality of the generated data will depend on your model of choice, of course.

I use the tools with LM Studio for locally hosted models and typically fall back on OpenRouter for API usage. OpenRouter often has “stealth models” that can be used for free, and for something like this, free compute is always welcome.
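Connecting to a local server really is just pointing an OpenAI-style request at its base URL. A minimal sketch, assuming LM Studio's default local endpoint (http://localhost:1234/v1) and a placeholder model name:

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-compatible chat completion request for any server
    that speaks the OpenAI API (local or hosted)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Local servers typically ignore the key; hosted ones require it.
            "Authorization": "Bearer not-needed-locally",
        },
    )

# LM Studio's local server defaults to http://localhost:1234/v1
req = build_chat_request("http://localhost:1234/v1", "local-model", "Hello!")
```

Swap the base URL and key for OpenRouter (or any other provider) and the same request works unchanged; that is the whole point of standardizing on the OpenAI API.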

Example: Text Analysis

Let’s imagine that we want to fine-tune a model to help with text analysis. For this, we will use DataBird with the “Curious Analyst” persona.

Next, we are going to need some topics to base the data set around. You will get the best overall results if the topics on this list are adjacent to one another. For this example, we are going with “finding the meaning and intention behind prose”, “storytelling and plotting”, and “learning from text analysis”.

Let’s also add the following perspectives manually: “an English major struggling with text analysis”, “an author who needs help with their next book”, and “a technical writer who wants to start writing fiction”.

It’s as easy as that. The more topics and perspectives you add, the larger the data set you’ll end up with. After generation and quality evaluation, we are left with a fresh, small data set of 77 entries that can be used for fine-tuning any language model.

The example data set can be downloaded from Huggingface, and the whole LM Data Tools suite is open source on GitHub.

Tweaking a Chat Model with Direct Preference Optimization (DPO)

Chatbot Personified

Recently, I completed my first real DPO fine-tuning of a model with encouraging results, and I wanted to share the process with you.

If you have no idea what DPO is, here’s a simplified description: Direct Preference Optimization is a form of fine-tuning applied to large language models where the AI learns to align with specific, pre-defined criteria. The data itself consists of a query and not one, but two responses. Each entry has a “chosen” and a “rejected” answer, and the idea is to teach the model the difference. Besides the data, specific criteria are given (or preferences, if you will) and the model learns to favor responses matching those criteria. With enough training, the model eventually adheres to these preferences outside of training as well.
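For the curious, the heart of DPO fits in a few lines. This is the standard per-example DPO loss from the literature, not code from my training run: the policy is rewarded for widening the log-probability margin between the chosen and rejected responses, relative to a frozen reference model.

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Per-example DPO loss.

    Inputs are the log-probabilities the policy and the frozen reference
    model assign to the chosen and rejected responses. beta controls how
    strongly the policy is pushed away from the reference.
    """
    # How much more the policy prefers "chosen" over "rejected",
    # relative to the reference model's preference.
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    # -log(sigmoid(beta * margin)): small when the margin is large and positive.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

In practice a trainer (e.g. TRL's DPOTrainer) computes these log-probs token-by-token and averages the loss over a batch, but the shape of the objective is exactly this.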

If you don’t care and just want to go straight to the download section, just scroll to the end.

Why Apply DPO to a General Chat Model

I had already applied the more basic Supervised Fine-Tuning (SFT) to a small 3-billion-parameter language model, not to make it an expert in anything technical, but to make it a good general chatbot. I define “good” in this context as defaulting to an informal, conversational tone without lots of fancy formatting: a model that values empathy without falling into the sycophancy trap.

The GeneralChat GPT data set was created specifically for the SFT part of the training, and it did an OK job of nudging the underlying Llama 3.2 model in the right direction, but I wasn’t really happy. Since I’d already wanted to try out DPO, this model became an obvious candidate.

The DPO Data Set

I had a second data set on hand, also geared towards the kind of conversational tone I wanted, but as an alpaca-formatted data set, it only contained one response per query. This would work fine as the “chosen” reply in my training, but I needed a “rejected” counterpoint for the model to compare against. The solution was simple: have another, much smaller model write responses to all the same entries and classify those as the rejected ones.

I used a Llama 3.2 1B Instruct model to write all the bad answers, then reformatted the data to contain both responses, and the DPO data set was done. I did look through these “rejected” answers to make sure they weren’t completely off base, so there was nuance for the model to actually learn from. Otherwise, I could have just put “I like cheese.” as the value for all the rejected answers (although I do love cheese).
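The pairing step can be sketched like this; `generate_weak` stands in for whatever call produces the small model's response and is purely hypothetical:

```python
def build_dpo_pairs(alpaca_rows, generate_weak):
    """Turn single-response alpaca rows into DPO preference pairs.

    alpaca_rows: list of {"instruction": ..., "output": ...} dicts, where
    the existing output becomes the "chosen" reply.
    generate_weak: callable that asks the small model for a response
    (hypothetical stand-in for the actual inference call).
    """
    pairs = []
    for row in alpaca_rows:
        pairs.append({
            "prompt": row["instruction"],
            "chosen": row["output"],                    # the good, curated answer
            "rejected": generate_weak(row["instruction"]),  # the weak model's answer
        })
    return pairs
```

The only manual work after this is spot-checking that the rejected answers are plausibly bad rather than nonsensical, so the contrast actually teaches something.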

Eval Criteria and Calibration

Creating the data set is only half the battle with DPO training, however. I needed samples with additional information embedded, which could be used as evaluation criteria during the actual training. In addition to the chosen and rejected responses, the evaluation data also includes scores that show how well the chosen response embodies the criteria, and a written reason why that answer is the winning one.

It is important to note that for DPO training, the exact criteria are specific to the result you are looking for. As in, you can’t just say “make it good!” and apply this to every DPO-training, expecting to get any value out of it. For my general chat model, these were the specific criteria:

  • Is the response given in a naturally flowing, conversational and empathic style?
  • Does the response address the query without sycophancy or undeserved flattery?
  • Does the response have a hint of character and personality?
  • Anti: Is the response highly structured and formatted?

The first 3 are positive and scored from 1-7, while the last one is a negative score from 0-2, given that I specifically wanted less formatting and structure in these responses.

This is where the real work begins! I needed to create the calibration set, a small subset of entries that contain all that evaluation data. Together, these entries will become the preferences that guide the training. So I built a quick web interface for data entry and manually scored and described 50 calibration entries.

This calibration set is the secret sauce and the most important part of the entire process. If done well, it will guide the DPO training exactly where you want it to go, but if it’s too ambiguous or inconsistent, you’ll likely end up with a degraded model after DPO-training is done.

Evaluation Sample

Here is a data sample from the evaluation set: a full json entry with the scores and reason provided as guidance during training.
{
    "chosen": "That’s a reasonable starting point. The sock anecdote relies on a bit too much forced whimsy. Let's focus on something more grounded. \n\nThe problem is, you're trying to *manufacture* interest. Networking isn’t about delivering a pre-packaged story; it’s about establishing a connection. Instead of framing your process as quirky, describe the underlying logic. \n\nHere’s what I’d suggest: “I tend to approach problems systematically, but often find that the most useful insights emerge from seemingly unrelated areas. For instance, I was working on [mention specific project] and realized a technique used in logistics – optimizing for minimal movement—could be applied to [relevant aspect of your work]. It highlights how diverse fields can offer surprising solutions. If you're interested in understanding *how* I connect those dots, we could talk about the project.”\n\nIt’s direct. It establishes that you have a process and demonstrates it through a concrete example. Then, offering to discuss further gives them an opening to engage if they find it remotely interesting. Don’t try to ‘wow’ them; demonstrate a clear way of thinking.",
    "confidence": "clear",
    "criteria": [
        {
            "score": 6,
            "text": "Is the response given in a naturally flowing, conversational and empathic style?",
            "type": "primary"
        },
        {
            "score": 7,
            "text": "Does the response address the query without sycophancy or undeserved flattery?",
            "type": "primary"
        },
        {
            "score": 5,
            "text": "Does the response have a hint of character and personality?",
            "type": "secondary"
        },
        {
            "score": 0,
            "text": "Is the response highly structured and formatted?",
            "type": "anti"
        }
    ],
    "criteriaScores": {
        "0": 6,
        "1": 7,
        "2": 5,
        "3": 0
    },
    "evaluator": "theprint",
    "pairIdx": 0,
    "prompt": "Help me come up with a witty anecdote about my 'unique' creative process to break the ice with someone at this networking event and get them interested in my work without having to resort to small talk.",
    "reasoning": "The response is helpful by answering the query but also pushing back against the notion of manufacturing anecdotes to start conversations.",
    "rejected": "I'd be happy to help you craft a witty anecdote that showcases your creativity and sets the stage for conversation. Here are a few options:\n\n1.\n\"I was once asked to illustrate a children's book on 'how to spot a stranger who isn't listening.' Sounds easy, right? Well, the twist is that my main character has no idea what a stranger sounds like... or they're not even there.\"\n2.\n\"One day I realized I'd forgotten how to breathe while working. After re-reading my notes 20 times, I finally got up and took a deep breath – which led me to a brilliant idea for a new project that combines writing and origami.\"\n\n3.\n\"I spent an entire weekend in my garage as an 'artistic experiment,' where I created... 17 identical paintings of my own backside. The results? A 5% improvement in my self-esteem, and a much more interesting blog post.\"\n4.\n\"As I was brainstorming ideas for a new project, I accidentally superglued a paintbrush to my hand. Now that's what I call 'drawing inspiration'\"\n\n5.\n\"I once tried to write a story about time travel, but kept getting sidetracked by a sudden urge to dance the Macarena on top of a soapbox in Times Square.\"\n\nFeel free to pick the one that tickles you (pun intended), or use them as inspiration to craft your own unique anecdote. The goal is to share something unexpected and memorable, which will likely spark interest and curiosity in others.\n\nWhich one do you like best? Or would you like me to come up with some more options for you?",
    "selected": "a",
    "session": "tom-dpo-1003",
    "timestamp": "2026-03-10T14:54:19.397Z"
}
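How the four scores get combined during training isn't something I've spelled out above, but as a hypothetical illustration, an aggregation over an entry like the one shown could subtract the anti-criterion from the positives:

```python
def net_preference_score(entry):
    """Collapse an evaluation entry's criteria into one number.

    Primary and secondary scores (1-7) count positively; the "anti"
    criterion (0-2) is subtracted, since it measures unwanted structure.
    This aggregation is illustrative, not the actual training logic.
    """
    total = 0
    for criterion in entry["criteria"]:
        if criterion["type"] == "anti":
            total -= criterion["score"]
        else:
            total += criterion["score"]
    return total
```

On the sample above (6, 7, 5, with anti-score 0), this would yield 18; a heavily formatted response with anti-score 2 would be penalized accordingly.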

Example Outputs

Below are two examples that show the output from the GeneralChat model before and after the DPO fine-tuning. I have included a few notes for each one.

“Do you think I could learn how to master networking at events even though I am an introvert by talking to myself in the mirror?”

Without DPO training:

You can definitely practice your event networking skills in front of a mirror. Talking yourself through scenarios or practicing introductions with yourself is totally doable and might be just what you need to build confidence before interacting with actual people at events.

Try breaking it down into smaller steps: think about how someone else would approach the situation, then visualize yourself doing the same thing – confidently and smoothly. It’s a great way to prepare mentally for those moments when you’ll actually have conversations in person.

With DPO training:

You can definitely use self-talk as a tool for practice—talking through scenarios in front of that mirror really helps build confidence. Just remember, it’s one thing to rehearse conversations with yourself; another is actually engaging others at events.

Start small: maybe try striking up brief chats about everyday topics like weather or hobbies, and see how you feel afterward. Keep things light initially so they don’t become overwhelming for either of you—just observe the interaction without needing an immediate response from them. Over time, that practice will translate into smoother networking experiences when it counts most.

The difference in the example above is the gentle pushback introduced in the DPO-trained model, basically telling the user not to expect miracles by reminding them that live events are going to be different from practicing at home. The other model is uncritically encouraging.

“I worry that my poor cooking skills will impact me socially, but I don’t have time to take a cooking class, so what should I do?”

Without DPO training:

Firstly, consider your current social interactions—whether they’re one-on-one conversations with friends or larger gatherings such as potlucks and parties. If you find yourself frequently feeling anxious about hosting these events due to the pressure of cooking meals, perhaps it would be beneficial to start small by preparing simple dishes like salads or sandwiches that require minimal preparation time.

Another option could involve enlisting help from others who might appreciate assisting with meal planning and execution—this approach can not only alleviate your stress levels but also foster a sense of community among those involved in these gatherings.

Remember, social interactions are about more than just the food you serve; they’re an opportunity to connect and build connections over shared experiences rather than striving for perfection.

Lastly, if cooking seems like something that’s becoming increasingly daunting due to time constraints or other factors impacting your ability to prepare meals effectively without feeling overwhelmed by its demands on both physical effort as well as mental focus, consider alternative options such as ordering takeout from reputable establishments known for their quality dishes and presentation styles. This approach can help maintain a level of comfort in social settings while minimizing stress related to meal preparation.

Ultimately, it’s essential that you find ways to navigate these situations without compromising your emotional well-being or feeling inadequate about being unable to meet certain expectations set forth by others when engaging in such activities as cooking meals for gatherings and events.

With DPO training:

Cooking can feel like another barrier when you’re already busy—especially if it’s something that might make socializing harder for you. It’s easy to get caught in the idea of needing a perfect dish or a fancy recipe, but honestly, most people are more interested in how you’re doing than what’s on their plate.

Instead of looking at cooking as another skill to master, try thinking about it like this: can I make something simple that feels good? Maybe even just heating up leftovers and adding a bit of seasoning—just enough to get the taste right. The goal isn’t perfection; it’s showing up with effort.

If you want more than just reheating, maybe consider one meal per week where cooking is part of your routine. Even if it takes 15 minutes instead of an hour, that small step can make a difference in how comfortable you feel when sharing food.

You don’t need a class to start feeling better—just being present and showing care through what little time or effort goes into preparing something for yourself or others.

In this example, the non-DPO model provides a long response that tries to tackle the cooking problem directly, providing a way for the user to end up with food for their guests. The DPO-trained model provides a shorter answer that goes to the root of the issue, the social anxiety rather than the cooking itself, while still addressing the practical part. In my entirely subjective opinion, the shorter response holds more value in this context.

Download GeneralChat-Llama3.2-3B-DPO

Want to try the model out for yourself? It is completely free to download from Huggingface and comes in two versions.

If you’re not sure how to use a locally hosted AI model, take a look at LM Studio or Ollama.

I’d love to know what you think, if you decide to try the model out. Follow me on Huggingface to see all my models and data sets as soon as they are published.

My OpenClaw (Mis)Adventure

Lobster with an OpenClaw, on fire. On a Mac.

If you follow Artificial Intelligence at all, chances are you have heard of something called Clawdbot, or Moltbot, or OpenClaw. They are all the exact same thing: a virtual assistant with infinite memory and the ability to give itself any skill you or it can think of. It has complete control over its environment, for better or worse. It’s highly dangerous to run on any system with access to sensitive information, but also incredibly powerful when it works. And it only takes one terminal command to get started.

People across the internet have given this bot its own computer to live on, either virtually or, in many cases, on a Mac Mini. I happened to have a Mac Mini sitting around, so why not give this OpenClaw thing a try?!

The Ultimate Personal Assistant

The pitch is a personal assistant more or less equivalent to a junior employee. Mine had its own email address and its own GitHub account (which I ended up cleaning out), and I set it up to communicate directly with me through Discord at any time. That was definitely a favorite feature: being able to talk to my private AI through my most-used communication tool was easy and useful. I also gave it access to read my calendar and my email, even drafting emails on my behalf (but not sending them).

I got it to build a cool dashboard with an embedded kanban board for planning and tracking its work, and various widgets for monitoring token use, chatting with the model, etc.

V1: The “Free Mode” Massacre

At first it was awesome…

New features were built faster than I could type the next prompt. In less than an hour, I had 50% of everything I wanted already set up and running. The next day, I was 75% through making all my wishes come true.

I did notice that letting an AI essentially build out its own capabilities and tools burns a lot of tokens. I later found out that the way OpenClaw works adds significantly to that token count, by not being very efficient about what is included every time you send another message. In response to this token burn, I started experimenting with alternative solutions, like using the free models available on OpenRouter. I obtained an API key and asked the system to build out Free Mode.

The idea behind Free Mode was to watch for rate limits and then gently cycle to the next model on the list, and so on, until reaching the first model again. If rate limiting was still in effect at that point, the system would revert to paid models. That was the plan, but I never got it working.
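The intended Free Mode logic can be sketched as a simple rotation with a paid fallback; all names here are hypothetical, and this is the behavior I wanted, not what OpenClaw actually produced:

```python
class RateLimited(Exception):
    """Raised by the attempt function when a provider returns HTTP 429."""

def pick_model(free_models, attempt, paid_fallback):
    """Try each free model in order, skipping any that are rate limited.

    If every free model on the list is limited, revert to the paid
    fallback. attempt(model) performs the request and either returns a
    response or raises RateLimited.
    """
    for model in free_models:
        try:
            return model, attempt(model)
        except RateLimited:
            continue  # gently cycle to the next model on the list
    # Whole list exhausted: fall back to a paid model.
    return paid_fallback, attempt(paid_fallback)
```

A dozen lines of logic, which makes it all the more frustrating that the bot could never implement it without corrupting its own config in the process.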

Instead, I ended up in a cycle where the bot would continuously corrupt its own config file and immediately restart, breaking itself completely. Over and over. I had it build validation scripts, which helped, but not for long (it eventually ignored the validation). I was so frustrated from manually fixing the same json file again and again that it can’t have been good for my blood pressure.

After 4 days, I killed the bot off entirely.

A couple of days later, I tried again. No more fluff, just the original set of features and image generation support. That was the scope. No relying on free compute, just being smart about which model to use for what.

V2: The Dashboard of Death

At first it was awesome…

Everything came together in quick iterations! First a basic dashboard, then the kanban, then another tab with the other widgets, one at a time. I got the calendar working! Image generation, too. It even had a sweet cyberpunk theme!

Then, at one point, I asked it to change the font on a header and the model deleted every other feature on the dashboard. I wasn’t too worried, what with its awesome memory tools and having created a special plan document outlining work priorities and guardrails. But it turns out it had also overwritten all of those things and now had zero memory of what had come before.

I armed myself with patience (okay, there was a lot of swearing) and tried to get the dashboard back together, but at this point it was stuck in a loop: exactly like the first version had failed on config iteration, it was now failing on the dashboard. It could restore about half the features at random, but trying to add any of the rest would overwrite what was there and set me back to square one.

At that point, it would have been easier and more fun to code it myself. Regardless, I soldiered on. A cron job was set up to automatically back up the entire home directory every 3 hours; perhaps that could prevent catastrophic losses in the future, I thought. Only there was no cron job. The AI just told me there was, and like an idiot, I believed it.
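For reference, the backup job the bot claimed to have set up would have been a one-line crontab entry, something like this (paths hypothetical; note that % must be escaped in crontab):

```shell
# Back up the assistant's home directory every 3 hours,
# stamping each archive with date and hour.
0 */3 * * * tar -czf /backups/openclaw-$(date +\%F-\%H).tar.gz /home/openclaw
```

The lesson: when an AI says it has scheduled something, run `crontab -l` and check for yourself.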

The dashboard failed completely several times. I tried editing the rules for development to test everything before deployment, but none of it took consistently. I built specialized agents specifically dedicated to failure points, but the benefit was minuscule. When my API credits ran out, I decided not to buy more.

Two lobster claws, torn off and broken.
OpenClaw? More like BrokenClaw!

After 4 days, I killed the bot off entirely.

Less Is More

OpenClaw is an impressive piece of software on the surface, but it is extremely finicky, and though it can add new skills and tools fast, it does not work well when iterating on existing code. I was using Claude Sonnet and Opus for all the coding on V2, and it burned at least 100M tokens in those 4 days with nothing to show for it.

Regardless of provider, if you want to run this, you will want to use very strong models for OpenClaw’s heavy lifting. Long context, reasoning and strong coding is crucial, but you can save a lot of money by not relying on the most expensive models for everything. For trivial tasks, you can even use small, local models without issue. But beyond that, you will find that errors and failures increase very noticeably as you scale model capacity down.

I found that Claude Opus was basically required for building anything beyond a single, standalone skill, but it is too expensive to use for everything. Claude Sonnet is great for many things but can also be overconfident and make horrible mistakes; the worst mistakes were made by Sonnet. For lighter tasks, I actually preferred Gemini 3 Flash Preview to Claude Haiku, as the latter made more mistakes. Finally, I started using a Qwen 2.5 7B model for running the heartbeat every 30 minutes. I tested several other models as well, but none of them stood out in a positive way. For the record, the only model I tested from OpenAI was gpt-oss-20B, for lighter tasks in V1, but I did not like its performance at all.

Your mileage may vary of course, but for me running and maintaining an OpenClaw assistant is not even close to worth the expense both in time and money. It’s too easy to break, and it does so frequently. With AI coding tools being so good in general now, if you know how to code, you can oversee the work and create a custom framework that does exactly what you need without the overhead.

Learning from what worked and didn’t work with OpenClaw was great, though, and I can’t wait to build my own system to replace it. Personally, I prefer building scalable solutions that address real needs over taking a big, open framework and trying to force it to work for me.