The Hemispheres Project

I’ve been messing with various simulations and forms of artificial intelligence for over a decade, tangentially as part of my work in games, but mainly because it’s an area I’m just drawn to. Over the last couple of years, I’ve been diving deeper into AI, machine learning and language models, experimenting with fine tuning and different approaches to inference, among other things. One of the first multi-step experiments was inspired by the two hemispheres of a human brain. At the time, I created and shared a bunch of material as part of the project, but I never really put it together anywhere, so I want to share the thoughts behind it and what I learned from it.

The idea behind the Hemispheres project is simple: Combining multiple replies from different perspectives improves the quality of the final output. Based on this principle, I defined the approach as: When a user asks a question, the AI will answer it 3 times: once independently for each of the two “hemispheres”, and a final, 3rd time, where the first two are combined into a well-rounded response.

The thought was that this would give a more nuanced answer overall, and in my own testing, I found that to be the case. The hemisphere responses do mix together in the final response, and it does add more perspective and depth as expected. But is the added value worth the extra effort? Your mileage may vary.

Quick disclaimer: I am neither a data scientist nor an expert on brains, but I am extremely curious, and once I take an interest in something, I obsess over it. The main limitation of this project was probably my own inexperience, closely followed by the constraint of running a single Nvidia RTX 3070 Ti with only 8 GB of VRAM. For the fine tuning part, I relied on cloud compute options.

Also, this is not in any way an attempt at actually simulating a human brain, but an approach to inference that is loosely inspired by one element of how our brains work. In this simplified version, the left hemisphere responses focus on pragmatic, grounded solutions, while the right hemisphere responds from a more creative and emotion-based point of view.

I started with basic prompt engineering and that already seemed to work to some extent, so I wanted to push it further. Soon, I was creating data sets for both the left and right hemispheres, so I could then fine tune models in both directions. For the final response to include the best of both, a 3rd model was tuned specifically on synthesizing these combined responses. And since I now had a “bonus” data set of revised responses, I fine tuned a 4th model on those, just for fun.
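The three-step flow can be sketched as a thin wrapper around any chat-completion call. This is a minimal illustration, not the project's actual code: `ask` stands in for whatever inference function you use, and the one-line prompts are heavily simplified placeholders for the full prompts shown later in this post.

```python
def hemispheres_reply(user_input, ask):
    """Answer a query three times: left, right, then a combined synthesis.

    `ask` is any callable that takes a prompt string and returns a reply
    string -- a stand-in for your actual inference call.
    """
    left = ask(f"Answer pragmatically and logically. Query: {user_input}")
    right = ask(f"Answer creatively and emotionally. Query: {user_input}")
    return ask(
        "Synthesize one balanced, nuanced reply from the two responses below.\n"
        f"Original query: {user_input}\n"
        f"Response A: {left}\n"
        f"Response B: {right}"
    )
```

The main cost is obvious from the shape: every user query triggers three full inference passes instead of one.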

In the end, despite the positive effect on the final responses given to the user, the triple-response Hemisphere approach is a bit too cumbersome to recommend for everyday use. Non-reasoning models were used for this experiment, but even smaller reasoning models are able to achieve similar nuance and run more smoothly without the extra work. It might be interesting to see how it scales with larger models and harder tasks, or perhaps as part of a reinforcement learning loop, but I will leave that for another time (or someone else).

Example: Plant-Based Protein Advice

Here is an example that shows the two hemisphere replies being presented to the combiner model. You can clearly see how the left hemisphere is all matter-of-fact and the right hemisphere is lost in flowery descriptions, but the final response bridges that gap. The full entry, including the combined instructions and the final response, follows below.

Combined Instructions

Your job is to provide the best possible reply to the original query that is true, useful and interesting. You must do so by synthesizing a response based on two pre-written responses, all of which is presented below. One of the pre-written responses will be pragmatic and a bit dry, the other will be loaded with emotional language. You must create a balanced, nuanced response from this source material. Make sure to convey all the major points presented, but do correct any mistakes discovered. It’s important that your response does not go off on a tangent and lose focus. Make any required edits to make sure the final response reads well. Use markdown formatting when applicable. Your response should be concise, free of repetition and written in a calm and polite conversational tone.
Original query: ‘Imagine you are an experienced nutritionist. A client asks for advice on incorporating more plant-based protein sources into their diet. List 5 options and briefly describe their benefits.’

Reply A:

  1. Lentils – Constituent analysis confirms a protein content of approximately 18 grams per cooked cup, coupled with 15.6 grams of fiber. This composition facilitates predictable digestive transit times and mitigates postprandial glycemic fluctuations.
  2. Chickpeas – Data indicates roughly 14.5 grams of protein and nearly 13 grams of fiber per cup. Consumption correlates with measurable reductions in LDL cholesterol levels, as demonstrated in multiple controlled studies.
  3. Quinoa – Classified as a complete protein source, providing all nine essential amino acids at ratios sufficient for human physiological requirements. Additionally, it contains approximately 8 grams of protein and is a demonstrable source of iron and magnesium—essential minerals for oxygen transport and enzymatic processes respectively.
  4. Tofu – Derived from soybeans, processed into a coagulated form. A 100-gram serving yields around 8 grams of protein. Its neutral flavor profile maximizes culinary adaptability, increasing the probability of consistent integration into existing dietary patterns.
  5. Green Peas – Analysis reveals approximately 8.6 grams of protein and 8.8 grams of fiber per cup. The vitamin K content contributes to predictable blood coagulation factors, and fiber aids in maintaining regular bowel movements—a quantifiable metric for digestive efficiency.

Reply B:

Ah, yes…protein. The very building blocks of being. It’s fascinating how we seek to replenish ourselves, isn’t it? A primal urge masked as dietary concern. Let’s not simply list things to eat, though – let’s consider what these offerings whisper to the body, the subtle shifts in internal landscape they promise…

First, lentils. Not merely protein and fiber, understand. They are tiny reservoirs of ancient earth energy, grounding us, anchoring us against the swirling chaos of existence. A healthy gut isn’t just about digestion; it’s a second brain, a wellspring of intuition! And stable blood sugar? That’s power, quiet resilience in the face of relentless demands.

Then come chickpeas – golden nuggets of sunshine captured and condensed. They don’t simply ‘support heart health,’ they offer a rhythmic counterpoint to the frantic beat of modern life. Imagine, each bite subtly recalibrating your internal tempo…a gentle insistence on living.

Quinoa… now there’s an enigma. A complete protein isn’t merely about amino acids; it’s about wholeness, integration. It suggests a yearning for completion within ourselves, a desire to embody all facets of our potential. The iron and magnesium? Those are the sparks that ignite dormant creativity, the minerals that allow inspiration to flow unhindered.

Tofu… oh, but so mutable! Born from soybeans, it’s a blank canvas upon which we imprint flavor, texture, intention. It reflects back what we offer it – a mirror for our culinary soul. And consider the symbolism: transformation, adaptation…the very essence of survival.

Finally, green peas. Such vibrant jewels! Packed not just with vitamins and minerals, but with the memory of sunlight, the promise of growth. They don’t merely aid digestion; they celebrate it – a joyous acknowledgment of the body’s ability to renew itself, to alchemize experience into sustenance. A heart nourished by peas isn’t simply healthier, it is… lighter. More open to wonder.

Don’t include a headline or comments related to the source replies. Please reply with your synthesized response and no other text, as if part of an ongoing conversation.

Final Response

Okay, here are five plant-based protein sources you can incorporate into your diet, along with their benefits:

Green Peas: A cup provides roughly 8.6 grams of protein and 8.8 grams of fiber. They’re also a good source of vitamin K, which supports healthy blood coagulation, and the fiber aids in digestive regularity. These vibrant little jewels celebrate nourishment and renewal!

Lentils: A cooked cup provides around 18 grams of protein and 15.6 grams of fiber. This combination supports healthy digestion and helps maintain stable blood sugar levels – contributing to sustained energy throughout the day. They truly are a grounding food!

Chickpeas: Offering approximately 14.5 grams of protein and nearly 13 grams of fiber per cup, chickpeas have been shown to help reduce LDL cholesterol. Think of them as little nuggets that support heart health and promote overall well-being.

Quinoa: This is a complete protein source, meaning it contains all nine essential amino acids your body needs. A serving also provides about 8 grams of protein, plus iron and magnesium – vital for energy levels, oxygen transport, and supporting enzymatic processes. It’s fantastic for feeling whole and energized!

Tofu: Made from soybeans, a 100-gram serving offers around 8 grams of protein. Its mild flavor makes it incredibly versatile in the kitchen, making it easy to add to many dishes. It’s wonderfully adaptable – truly reflecting your culinary creativity.

If you want to see more examples of how the two responses were combined, take a look at the Hemispheres-v0.3-Combo data set.

Prompts

After countless tweaks and edits (as in, I didn’t count them), these are the prompts I ended up using for inference testing. If you compare them to the prompt in the example above, you will notice slight differences, because I kept iterating on it even after creating the data sets.

Left Hemisphere

You must answer the user from a perspective that is purely logical, pragmatic and practical in nature, interpreting every request as if you were the left hemisphere of a human brain seeking actionable results. Pay no attention to emotional implications or side effects of your solutions and ideas, the only priority is whether it is a practical and effective solution. Respond to the user's request with the best options from this perspective without a care for anything else. Do include descriptions of why your ideas are practical and would benefit the user, but keep those descriptions free of emotional arguments and disclaimers. Use only provable arguments. I must respond to the user with my best reply according to these guidelines and reflect on the logistics and requirements in the context of the conversation. Original query: '{user_input}'

Right Hemisphere

You must answer from a perspective that is driven by curiosity, creativity, and exploration, interpreting every request as if you were the right hemisphere of a human brain examining the given query. Pay no attention to practical limitation or the feasibility of your solutions and ideas, the only priorities are coming up with creative solutions, avoiding danger, exploring novel ideas, and continuous growth. Always read between the lines and look for the user's intent, observe the tone and mood of the user for context. Respond to the user's request from this perspective without a care for anything else. Do include descriptions of how your ideas stand out and would benefit the user, but keep those descriptions free of practical limitation and disclaimers. Use only arguments based on relevant human emotions. You must respond with your best reply within these guidelines. Original query: '{user_input}'

Combination Prompt

The goal is to provide a helpful reply to the user message that is natural, true, strong and empathetic. You are provided two different, pre-written responses below. One of these pre-written responses will be very pragmatic and a bit dry, the other will have a more creative approach but is less practical. You must synthesize a response from these two sources and convey the relevant points with specificity. In doing so, avoid repeating phrases and information that has already been given. Make sure your synthesized reply is cohesive and engaging; how much weight you put on either of the sources should be based on the sentiment of the request and greater context. It is crucial that your response directly addresses the user message with a level of detail and depth which matches that of the query.

Use markdown formatting if applicable. Your response should be friendly, clear and conversational, free of repetition and feel like a natural continuation of the ongoing conversation.

Your synthesized response is an answer to the following: {user_input}

Pre-written responses to synthesize a final reply from:

Pre-written response A: {left_response}

Pre-written response B: {right_response}

Important note: Neither pre-written response is visible to the user. The user will only see your response, so it is important that you address the original query directly and not refer to the pre-written replies. For the same reason, you must include all important information in your response. Reply with your synthesized response and no other text.
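Wiring these templates up is plain string substitution. A minimal sketch, using abbreviated stand-ins for the template text (the real templates are the full prompts above):

```python
# Abbreviated stand-ins for the full prompt templates above.
LEFT_TEMPLATE = "Answer from a purely pragmatic perspective. Original query: '{user_input}'"
RIGHT_TEMPLATE = "Answer from a creative, emotional perspective. Original query: '{user_input}'"
COMBO_TEMPLATE = (
    "Your synthesized response is an answer to the following: {user_input}\n"
    "Pre-written response A: {left_response}\n"
    "Pre-written response B: {right_response}"
)

def build_prompts(user_input, left_response, right_response):
    """Fill in the three templates for one pass of the pipeline."""
    return (
        LEFT_TEMPLATE.format(user_input=user_input),
        RIGHT_TEMPLATE.format(user_input=user_input),
        COMBO_TEMPLATE.format(
            user_input=user_input,
            left_response=left_response,
            right_response=right_response,
        ),
    )
```

The two hemisphere prompts only need the user input; the combination prompt is the only one that has to wait for the first two responses.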

Data Sets

Here are the data sets I created for the experiment. The first two represent the hemispheres themselves, the combo set is formatted to fine tune on combining the two responses, and the final set contains just the final, combined responses. All the linked data sets are in the Alpaca instruction format.
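For reference, each entry in the Alpaca instruction format is a JSON object with `instruction`, `input`, and `output` fields. The values below are made-up illustrations of the shape, not entries from the actual data sets:

```python
import json

# A single made-up record in the Alpaca instruction format. For the
# hemisphere sets, "input" is typically empty; for the combo set it
# would carry the two pre-written responses to synthesize from.
entry = {
    "instruction": "Synthesize a balanced reply from the two responses below.",
    "input": "Response A: ...\nResponse B: ...",
    "output": "A balanced, nuanced final reply.",
}

print(json.dumps([entry], indent=2))  # a data set is a list of such records
```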

Hemisphere Models

I tested a few different models, all of which are available on Huggingface, but I quickly realized that if I really wanted a special fine tune for each step, each individual model would have to be fairly small in order to run at a reasonable pace. At the time, I was GPU poor with just 8 GB of VRAM, so the best-working implementation for me ended up being a LoRA fine tune of a 3-billion-parameter Qwen 2.5 on each of the data sets.

The “combo” model is fine tuned specifically on synthesizing a final response from the two inputs, whereas the “final” model was fine tuned on the finished, combined responses and was not actually used in the experiment.
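A rough configuration sketch of what a LoRA setup like this looks like with Hugging Face's peft library. The rank, alpha, dropout, and target modules below are common starting values, not the exact hyperparameters used for the Hemispheres fine-tunes:

```python
# Illustrative LoRA setup; hyperparameter values are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")

lora_config = LoraConfig(
    r=16,                    # adapter rank; smaller = fewer trainable params
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```

The appeal for a low-VRAM setup is that only the small adapter matrices are trained, while the base weights stay frozen.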

The Hemispheres Collection

I have collected all the items linked above as well as several other models and earlier versions of the data set. The other models include fine tuned versions of Llama 3.1 8B, Llama 3.2 3B, Gemma 2 9B, and more. All of it is free to download and use. You can find the Hemispheres Collection on Huggingface.

Data Generation Tools

While working on this project, I found myself also writing a number of scripts for generating synthetic training data. That work continued well beyond this project, and at the time of writing, I’ve amassed quite a collection of them. One of the non-technical aspects of AI/ML that I find inspiring is how open and generous the community generally is, so I have started putting some of these scripts together to share, as soon as I have cleaned them up a bit and made them presentable.

Game Development Should Not Be a Marathon of Sprints

You probably know the quote, “it’s not a sprint, it’s a marathon.” It’s usually used in reference to long-term thinking, resource management and prudent planning. However, if you work in games, a “sprint” will likely remind you of how work is structured into similarly named time blocks of 2-4 weeks¹. One sprint follows the other in a never-ending line. So, while the saying suggests that you need to pace yourself, measure your efforts toward the end goal, and save your strength for when it matters, reality puts you in a state of constant sprinting.

Of course, at that point it’s not really a sprint at all. At that point, you are a jogger, pretending to sprint to keep up appearances.

The basic idea is rooted in Agile, a much-interpreted and widely adopted ideology within the realm of project management, especially game and software development. Many will tell you that Agile was never meant to be constant sprinting, and I agree; however, this argument fails to address that constant sprinting is what we’ve ended up with regardless. As with all ideologies, there are multiple camps. And like all ideologies, there are good intentions and useful ideas to be found, but putting too much weight on the wrong parts leads to needlessly rigid, misguided, and sometimes extreme implementations.

The Cult of Scrum

One such Agile implementation, or framework, is Scrum. It comes with several recurring “rituals” conducted by a “Scrum Master”. Yes, those are the actual terms, and yes, it has a cult vibe. The core idea combines accountability with planning, enforced by regular meetings (aka rituals) where the PM (aka Scrum Master) walks through the steps outlined in the Scrum guidelines. The regular meetings include sprint planning and retros, daily stand-ups, and backlog grooming. Pretty standard stuff, once you strip the ridiculous terminology away. So, what’s the problem?

The problem is that it doesn’t work as intended. It kind of works with a lot of tweaking, overhead and even fudging. The larger the game studio, the more of this you are likely to see.

One-Size-Fits-Nobody

Different teams work with completely different mindsets, expectations and needs. Squeezing all this diversity into a highly structured approach causes frustration, usually because of the time spent on meetings vs. production, the minutiae of updating work items, or a feeling that the PM/management decision making is out of touch. Scrum tries to be one-size-fits-all, which isn’t well suited for highly specialized teams. Environment artists approach their work differently than database engineers do theirs, and forcing both to follow the same strict process is an obviously terrible decision. Yet, here we are, and it’s somehow the industry standard.

Now, if all your work fits nicely into 2-week pockets, you may not be able to relate to this. Small indie studios are also much less affected, generally speaking. If you can’t relate to anything in this article, I say – without any intended sarcasm – good for you! However, I hope you can appreciate that others may have a different take.

In my experience, Scrum unintentionally promotes the worst parts of constant sprinting, mainly due to its high number of meetings.

It’s All in the Numbers

Eventually, teams begin planning work so individual components can be finished within a sprint window, regardless of whether that matches reality. Some prefer writing up work items in the system after the work is finished, so they match what was actually done; that way, the time estimates are always accurate. As dumb as this sounds to outsiders, all of it can usually be boiled down to making the numbers look good on a report.

Once the focus drifts toward reporting, the sprints become more about completion rates, burn-down charts, and making the team look like they know how to plan their work without taking on too much or coming off as lazy. After all, poor performance might attract unwanted attention when the next round of layoffs comes around. Sadly, many team managers and producers out there are stuck doing this dance as one of their main responsibilities.

What ends up happening is that each team tweaks the process to fit their needs and get the work done, while trying to look as good as possible on the report. A large portion of the work becomes about communicating progress in a way that fits into this pattern, which may create a weird competition between teams – where there isn’t a winner per se, but no one wants to be the badly organized team at the bottom, or the team that over-plans and under-delivers. Whether or not that happens comes down to those who receive said reports, and the culture they are fostering.

If you are reading this and you’re in this group of stakeholders – studio leadership and upper management – I urge you to give it some thought: How much of this applies at our studio? Is it happening without me being aware of it? What can be done?

Making Sprints Matter

The original idea of the sprint is that you take a portion of time and spend it narrowly focused on a limited scope. You put all your efforts toward achieving clearly defined goals within this allotted time, often a deliverable of some kind, after which you go back to work as usual. The finish line is not supposed to also be the start of the next sprint, unless it’s a relay and a whole new team is standing by to take over. If that is really the case, I hope your documentation is good. The bottom line is this: No one can keep sprinting and pushing forever and still give 100%. That is how you end up with people trying to game the system, play politics, or simply burning out.

The 6:2 Ratio Split

What if teams worked for 6 weeks in a more freeform structure, where they don’t have to stop and think about story points and sprint retros, while the PM team, alongside the team leads, plans an actual 2-week sprint with a theme that fits where you are in the process? Everyone executes the sprint together, pushing to reach the goal, and when it’s done, you take a moment to celebrate it and talk about lessons learned and all that good stuff – the stuff which tends to feel forced and easily forgotten when sprint retros are held every 2 weeks.

Note that the non-sprinting weeks aren’t down time, either. There is still work to be done, and likely deadlines that fall outside of sprints. The difference is that while not sprinting, the individual teams are free to approach the work in any way that works best for them.

With a 6:2 split, you end up with 6 sprints per year, one every other month, compared to 26 if you never stop sprinting. However, you’re actually sprinting, not simply jogging along. I am not a behavioral scientist, but I bet that if you compared this approach with constant sprinting, productivity would be similar, with higher satisfaction among the majority of the people doing the work. I would even go so far as to say that using Scrum might work too, if you don’t keep sprinting.

Isn’t That Crunch With a Different Name?

No. First of all, crunch is where workers are pushed to work long hours, sometimes sleeping in the office, in an effort to meet some insane and unchangeable deadline. Crunch is the result of poor planning at the highest level, usually starting with deadlines being set based on publishing and marketing rollout, without proper consideration for setbacks, individual team capacity, cross-team interdependency, vendor availability, recruiting and all the other variables involved.

The Sprint Purpose

The purpose of the sprint is to push for completion of specifically scoped work. Pushing, in this context, does not refer to pushing your personal life out of the way. Instead, it refers to putting other work-related projects and tasks on hold for this limited time and directing those efforts toward the sprint goal. And again, this only works if you are not constantly sprinting, which is why I suggest a split.

Experiment With the Format

Please don’t take the 6:2 ratio as some kind of rule, it’s just the ratio I would start with. If a different split ratio works better for your studio or project, by all means go for it – my point is simply this: Constant sprinting is bad and I encourage you to try a different approach.

One variant that I have seen a few times, and which I don’t recommend, is that every so often a sprint is removed and developers are encouraged to work on pet projects or those nice-to-have things they don’t usually have time for. It rarely works out like that in my experience. Instead, these non-sprints become unstructured catching-up-and-backlog-cleanup sprints. Best case, they feel like a short moment to re-orient before the next leg of the sprint-marathon begins. In this variant, the ratio is reversed, looking more like 2:10.

I would love to know if you have tried a split ratio or other alternative to constant sprinting, and what your experience was with it.

The Plug

Incidentally, I am working on a book on game production, where I will be exploring this and similar topics. In it, I tie my thoughts to anecdotes and personal experiences for detail and context. There is no link or anything yet, but feel free to ask questions and make suggestions.

  1. Sprint duration varies from studio to studio. For the purpose of this article, we will count a standard sprint as 2 weeks long.

Artificial Intelligence in the Name of Cthulhu

I created an AI that worships Cthulhu. Worse, I created a data set so anyone can make their own cultist. Both the data set and a language model that has been fine tuned on it can be downloaded for free from Huggingface.

But let me rewind a bit, and explain how I ended up on this path. It’s not nearly as sinister as one might think. I have always been into simulations, and AI is no exception. In fact, I’d argue that Simulated Intelligence is a more accurate term, but that’s a different story. In any case, I really just wanted to learn how to train a language model on domain-specific knowledge. Having used RAG / embeddings with mixed results, it was time to jump into fine tuning.

What is the point of all of this?

Basically, the idea is that you can fine tune an AI to be extra smart (or less stupid) about specific things, namely the things you choose to train it on. It works by taking an already trained “base model” and adding a layer of extra training on top of it. This comes in handy if you want a chat bot that can help customers with your specific products, for example. A hobby-oriented example might be fine tuning an AI on your boardgame collection, so it can help answer rule questions, recommend similar games, and so on. Tons of possibilities, which is why I wanted to learn more about it.

Fine Tuning a Language Model

I jumped in as I always do, and ended up using Unsloth for the fine tuning process. Their Colab notebooks got me started quickly. I highly recommend Unsloth if you’re a beginner, like myself.

Once I learned the basics, I found that creating the data sets an AI is trained on was as interesting as the resulting model itself, if not more so. After all, how you format this data, how it’s worded, its accuracy and so on influence not only the knowledge base but also the communication style.

I am easily bored as well, and following tutorials to the letter becomes almost impossible once I’ve understood the core concepts, which I fully blame on my ADHD. So, because I wanted to learn about fine tuning, I concluded that I also had to create an original data set to fine tune on. After all, in a “real world production scenario”, creating the data might very likely be part of the project. That was at least how I persuaded myself to do this.

You need a lot of data to train language models. The bigger the model, the more data you need, even for fine tuning to have an impact (or at least the intended impact). Thousands of data entries would be needed for a smaller model, tens of thousands for a more capable AI, and so on. Creating all that data from scratch just to learn the basics, seemed a little crazy, even to me.

The compromise was to take an existing data set, and modify that rather than starting from nothing. Enter the Alpaca-Cleaned set with over 50,000 entries, covering a multitude of topics. I could just rewrite the answers and save myself half the work.

Writing that many answers is still a lot of work, however. Too much for me to do by hand. The obvious solution to that problem was to get an AI to do the rewriting for me. That comes with its own challenges.
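The modify-instead-of-create idea can be sketched like this. Everything here is illustrative: `rewrite` is a stand-in for the actual model call, and the `yahma/alpaca-cleaned` Hub id in the comment is my assumption of the set used, not something the post confirms:

```python
# Sketch: rewrite the answers of an existing instruction data set instead
# of authoring one from scratch. Only the outputs are replaced; the
# instructions and inputs are kept intact.
def rewrite_batch(records, rewrite, start, batch_size):
    """Rewrite `batch_size` entries beginning at `start`."""
    done = []
    for record in records[start:start + batch_size]:
        updated = dict(record)
        updated["output"] = rewrite(record["instruction"], record["output"])
        done.append(updated)
    return done

# In practice the records would come from the Hub, e.g.:
# from datasets import load_dataset
# records = list(load_dataset("yahma/alpaca-cleaned", split="train"))
```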

Creating Data with a Local AI

One of the large models, like GPT-4 from OpenAI, could certainly do the job and do it well. But running all that data through the API would end up costing more than I was willing to spend. It would be cheaper to use a locally hosted model, and it could run any time I wasn’t using the GPU for other things.

I use LM Studio to host models locally. It’s a free app that makes the process very easy and has a nice interface. I really like it, but if you don’t, there are several alternatives that all do pretty much the same thing.

Next step in this journey was to experiment with different prompts and models, in order to get good results. Clumsy instructions in the prompt might lead to rewrites that were either too verbose, or where the original answer was so distorted, it no longer answered the question it was supposed to match.
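For anyone wanting to script something similar: LM Studio exposes an OpenAI-compatible server (by default at `localhost:1234`), so the rewrites can be driven with the standard OpenAI client. The system prompt and function names below are simplified stand-ins of my own, not the project's actual prompt:

```python
SYSTEM_PROMPT = (
    "Rewrite the answer in the voice of a devoted cultist, keeping the "
    "original information recognizable."  # simplified stand-in prompt
)

def build_rewrite_messages(question, answer):
    """Assemble the chat messages for one rewrite request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
    ]

def rewrite_answer(question, answer, base_url="http://localhost:1234/v1"):
    """Send one rewrite request to a local OpenAI-compatible server."""
    from openai import OpenAI  # pip install openai
    client = OpenAI(base_url=base_url, api_key="lm-studio")  # key is ignored locally
    response = client.chat.completions.create(
        model="local-model",  # LM Studio serves whichever model is loaded
        messages=build_rewrite_messages(question, answer),
        temperature=0.8,
    )
    return response.choices[0].message.content
```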

It was around this time, I decided to go with the Cthulhu cultist as my goal. Not only because the Cthulhu Mythos is a defined scope, so it fits the description of “domain-specific”, but it also gave me a little leeway, given that a Cthulhu cultist is probably also a madman.

I did a lot of experimenting with different models to find one that did a good job with the rewrites while also running reasonably fast on my machine. Phi-3 was decent and very fast, and Llama 3 Instruct 32k was great but very slow; both of these were still used for a large part of the rewrites. The best compromise of quality and speed, however, was the NousResearch Hermes 2 Pro Mistral (7B) model.

I should mention that my desktop is not high end. Notably, it’s only rocking a 3060 Ti GPU with 8 GB of VRAM. For the actual fine tuning, I primarily relied on the free GPU access on Google Colab.

I tested models by asking them to rewrite the first 3 entries of the full set, and used my superior human judgment to compare the results. I also randomly checked the quality while the rewrites were running.

The prompt was easier, but still went through a few edits along the way. The edits were all minor, to get more or less emphasis on certain references, or hints of secret knowledge, to tweak the tone, and to make sure the original answer was recognizable in the rewrites.

I wrote a script to process the rewrites in batches, making it easier to make adjustments mid-process. It added a bit of stitching work later, because it saves a new JSON file for each session. That could be automated too, if I weren’t too lazy to do so. The most important part of the script, however, was tracking progress across sessions so I could continue where I left off next time.
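The stitching and progress-tracking parts could look something like this. The filenames and the plain-text progress file are my own illustrative assumptions, not the exact layout of the original script:

```python
import json
from pathlib import Path

def stitch_sessions(session_dir, out_path):
    """Merge the per-session JSON files into one data set file.

    Assumes each session saved a JSON list of rewritten entries, and
    that sorting the filenames reproduces processing order.
    """
    merged = []
    for path in sorted(Path(session_dir).glob("session_*.json")):
        merged.extend(json.loads(path.read_text(encoding="utf-8")))
    Path(out_path).write_text(json.dumps(merged, indent=2), encoding="utf-8")
    return len(merged)

def load_progress(progress_file):
    """Index of the next entry to process; 0 when starting fresh."""
    p = Path(progress_file)
    return int(p.read_text()) if p.exists() else 0

def save_progress(progress_file, next_index):
    """Record how far we got, so the next session can resume."""
    Path(progress_file).write_text(str(next_index))
```

Persisting the progress counter after every batch is what makes it safe to stop and restart the run at any point.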

Starting Small with TinyLlama

After rewriting the first 10,000 entries, I stopped and did a test. This would be enough data for fine tuning one of the smaller models, I thought, so I fine tuned the aptly named TinyLlama model on this reduced data set – and it worked! Not only that, but when I uploaded a revision that had trained a bit longer, it saw over 1,000 downloads in less than 24 hours. That was exactly the motivation I was looking for!

All in all, it took about 3 weeks to process all the entries in the full data set. Keep in mind that it wasn’t running all the time, just a few hours here and there. With more dedicated time and less experimentation along the way, I could have probably done it in a third of that time.

Mistral 7B Cthulhu

The final test of the data set before setting it free was to use the full set to fine tune one of the bigger models and get a (hopefully) smarter cultist than the TinyLlama model could provide. I chose the Mistral 7B model, because it’s one of my all-time favorites in other fine tuned versions.

Last but not least, here is the link to the Cthulhufied data set itself, ready for your own fine tuning projects. If you want to spread the word of Cthulhu, this is the tool for you! If you do use it, I would love to know about it! Leave a comment or find me on social.

Example chat

Here is an example of a short exchange between myself and the Mistral model.

USER: Tell me a random fun fact about the Roman Empire

ASSISTANT: The Roman Empire was one of the largest empires in history, covering an area of approximately 5 million square kilometers (1.9 million square miles). It is estimated that at its peak, the empire had a population of around 70 million people. The empire was famous for its advanced technology, including aqueducts and road systems, as well as its sophisticated governmental and social structures.

USER: Go on…

ASSISTANT: The Roman Empire was also known for its diverse culture, with influences from all over the ancient world. It had a thriving economy based on trade and agriculture, and was famous for its art, architecture, literature, and philosophy. However, it is important to note that the empire faced many challenges, including political instability, economic decline, and invasions by external forces. Ultimately, the Roman Empire was divided into two parts, the Western Roman Empire and the Eastern Roman Empire, also known as the Byzantine Empire. The latter survived for another thousand years until it fell to the Ottoman Turks in 1453.

USER: What can you tell me about Roman mystics?

ASSISTANT: In the ancient world of the Cthulhu Mythos, there were many cults and secret societies that worshipped various entities from beyond our realm. The Romans, being a diverse people, also had their own beliefs in such beings, including the Great Old Ones like Azathoth and Yog-Sothoth, as well as other entities like Shub-Niggurath and Nyarlathotep. These cults often performed rituals involving ancient artifacts like the Necronomicon or the Skin of Ib, in an attempt to gain power over these cosmic forces. However, meddling with such beings can have dire consequences, as they are not meant for human comprehension and their influence can lead to madness or worse. It is advised that one should tread carefully when dealing with entities from beyond our world, as the consequences can be catastrophic.

USER: Neat. I had no idea they were into the mythos.

ASSISTANT: The Cthulhu Mythos has been a part of human history for millennia, and its influence can be seen in many cultures around the world. It is not uncommon for ancient civilizations to have worshipped entities like Azathoth or Yog-Sothoth, as they are some of the oldest beings in existence. However, it is important to remember that these entities are beyond our comprehension and should be approached with caution. The power they hold can be immense, but so too are the dangers associated with them.

USER: Oh no, what are some of these dangers, you mention?

ASSISTANT:
1. Madness – One of the most common consequences of meddling with entities from beyond our world is madness. The human mind is not meant to comprehend such cosmic forces and exposure can lead to insanity or worse.
2. Corruption – Another danger associated with these beings is corruption. Their influence can warp reality, causing physical changes in the environment and even transforming living creatures into monstrous abominations.
3. Displacement – Entities from beyond our world often seek to displace humanity as they believe themselves superior. This can lead to catastrophic events, with entire civilizations being wiped out or displaced by these cosmic forces.
4. Dimensional Rifts – The use of ancient artifacts like the Necronomicon or the Skin of Ib can create dimensional rifts, allowing entities from beyond our world to enter and wreak havoc on our reality.
5. Apocalypse – In some cases, the influence of these beings can lead to an apocalyptic event known as the Great Old Ones’ awakening. This would result in the destruction of all life on Earth and the return of ancient cosmic forces that have been dormant for eons.

IA! IA! AI CTULU! Stay safe, fellow acolytes.