AI-assisted development of adventure games

I think Leonardo.ai can produce more detailed images than the ones I made (and I think paying gets you access to higher resolutions); to me it’s more a question of whether to train it on MI2 or not…

The MI2 art is very “busy”, so it’s harder to get backgrounds that are “logical”, with things in clear places.
The art in the video looks a lot more “ordered” to me than the MI2 art (as well as being detailed): I can tell what things are meant to be more clearly.

The MI2 art might be more controllable if you use a photo as a starting image for it to redo in the MI2 style, though I’m not sure if Leonardo.ai supports that.
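For reference, that “photo as a starting image” idea is what Stable Diffusion calls img2img: the photo is partially noised and then denoised toward the prompt, so the result keeps the photo’s layout. Here is a toy numpy sketch of just the noising/mixing step — the `strength` parameter is an illustration of the concept, not the real scheduler math:

```python
import numpy as np

def noise_init_image(init, strength, seed=0):
    """Toy sketch of img2img's first step: mix the starting image with
    random noise. strength=0.0 keeps the photo untouched; strength=1.0
    discards it entirely (like generating from scratch), and values in
    between decide how much the photo's composition survives."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(init.shape)
    return (1.0 - strength) * init + strength * noise
```

In the real pipeline this happens in latent space and the denoiser then pulls the noised image toward the prompt, but the trade-off is the same: lower strength means a result closer to the source photo.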

Training it on some specific art does get you a good imitation of that style though, so that’s cool to know.


I still know almost nothing about it, so I can’t answer.
But if you provide a sketch, it seems it will reproduce whatever amount of detail is in the sketch. 🙂

That’s the technique I’m most interested in, personally.


Looking into Leonardo.ai more, it seems it’s actually built on Stable Diffusion.


My hypothesis is that in order to create backgrounds similar to the ones in the video, an AI would have to be trained with digital art images that show nice “organic” scenes and many coherent details. I don’t think the original backgrounds for MI2 are well suited for this.

On the other hand, the MI2 Special Edition backgrounds might have been more suitable.


I tried training the AI on the Simon the Sorcerer art you posted previously, and I told it to make a theme park street -

It comes out with some dark and nice stuff, but too dark/ominous really…

Yeah, I’ll see if I can get a bunch of those…


Yes, it captures the dark side of that scene very well. While it’s not well suited for an MI3a, the results are very nice!

I compared Chan’s original art with the digitally drawn background for MI2SE, and I think the latter is “cleaner”, meaning it doesn’t have the typical artifacts you get when you draw on paper and then digitize the image.

However, I’m not sure if this aspect would significantly (and positively) affect the generated images.


Looking at the MI2 Special Edition backgrounds, it looks like they’ll have the same issue: everything is at wonky angles, which makes the AI produce more confused-looking results…

They are kind of garbled already, before AI has even got its hands on them!


In comparison, the video images are cartoony but more logical -



Could the guy have used photos from a real theme park and asked the AI to use a “Monkey Island adventure game” style?


Yeah, he may have even generated the initial photo using AI, like these -


This Clipdrop is not bad!


But in Stable Diffusion, with the Automatic1111 GUI at least, there is no way to train a model from several pictures: img2img works with only one picture.

There does seem to be an extension called DreamBooth that does exactly that, though: it takes around 15 pictures and creates a model, or checkpoint:


If the AI starts with an image, then I assume it’s a “style transfer” process, meaning the source image doesn’t have to be about theme parks, it just has to provide a style to mimic.

The point is… I can’t think of any game that has the style(s) shown in the video.

It’s not a crazy style like S&M or DoTT. It’s not MI (1-2-3-etc.). I don’t know many Sierra games, but I don’t remember anything like this.

Some shadows (on the tent, projected on the roof…) make me think that the starting point was a real photo and not the art of a game, but I could be wrong because newer generative AI can handle light quite well.


This is the central problem: we don’t know what to put into the prompt.

I tried a site that claims to reverse-engineer the prompt from an image you give it: CLIP Interrogator, a Hugging Face Space by pharmapsychotic.

I put in the two video images and got:

“a picture of a street with a ferris wheel in the background, the curse of monkey island, that we would see in the essoldo, low resolution sync, neosvr!!!, miraculous ladybug, desenho, [ red dead ], youtube video, antialiased, kano)”

“a painting of a restaurant on the side of a road, alice in wonderland 3 d, italian beach scene, pixar cartoon, otherland, april, inspired by Bartolomeo Cesi, gameplay screenshot with ui, city of atlantis, news footage, clear image, dixit card, leaked image”

But neither of those creates a similar image when put back into an AI art generator (nor do the most obvious parts of them).


One interesting point is that when I ask Google Images to find visually similar images, the second image on the list is Big Whoop!

I get Big Whoop even when I crop out Guybrush and the user-interface elements, which might otherwise have influenced the search engine too much.

Maybe this suggests that that art was used for the “look and feel” and that the guy found a way to get more organic/logical contents.
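For anyone curious how a “visually similar” lookup works at a basic level: one classic trick is a perceptual hash, where two images that look alike get fingerprints that differ in only a few bits. This is a toy average-hash in plain Python (a sketch only — real engines like Google Images use far more sophisticated features):

```python
def average_hash(pixels, hash_size=8):
    """Tiny perceptual hash. `pixels` is a 2D list of greyscale values.
    Downscale by block-averaging to hash_size x hash_size, then record
    which cells are brighter than the overall mean."""
    h, w = len(pixels), len(pixels[0])
    by, bx = h // hash_size, w // hash_size
    cells = []
    for gy in range(hash_size):
        for gx in range(hash_size):
            block = [pixels[y][x]
                     for y in range(gy * by, (gy + 1) * by)
                     for x in range(gx * bx, (gx + 1) * bx)]
            cells.append(sum(block) / len(block))
    avg = sum(cells) / len(cells)
    return [c > avg for c in cells]

def hamming(h1, h2):
    """Count of differing bits; lower means more visually similar."""
    return sum(a != b for a, b in zip(h1, h2))
```

Small brightness or detail changes barely move the hash, while a genuinely different composition flips many bits, which is roughly why a cropped screenshot can still land on the Big Whoop background.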


Yeah, it’s like it’s trained on just that one end background from MI2 and then applied to photos somehow.


I can rule out that the background in the video was the result of a “style transfer” to a real photo or good picture, though.

If that were the case, the guy wouldn’t have gotten the artifacts in the image, like the disappearing lampposts or the meaningless text. It’s more likely that the image was generated by an AI and that an existing image (like the Big Whoop background) was used to inherit an art style.


Oh yes, and you noticed this at the very beginning, when you pixelated high-resolution images.

The overall effect in that video felt like a high-res image being resized, a bit like that fan-made pixelation of an RtMI background.

I’m still trying to find a similar style. Maybe Bill Tiller’s?
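That resized look is easy to fake, which is part of why it’s hard to tell. Here is a minimal pure-Python sketch of the effect: average each tile and paint the whole tile with that value, which is what downscaling and then nearest-neighbour upscaling back to the original size does:

```python
def pixelate(pixels, block=4):
    """Fake a low-res look on a 2D list of greyscale values: replace each
    block x block tile with its average, producing crisp square 'pixels'
    at the original resolution."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for ty in range(0, h, block):
        for tx in range(0, w, block):
            tile = [pixels[y][x]
                    for y in range(ty, min(ty + block, h))
                    for x in range(tx, min(tx + block, w))]
            avg = sum(tile) // len(tile)
            for y in range(ty, min(ty + block, h)):
                for x in range(tx, min(tx + block, w)):
                    out[y][x] = avg
    return out
```

The giveaway in genuinely resized images is that the “pixels” align to a regular grid, whereas art drawn at low resolution can place detail off-grid.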


Random things I have noticed -

-The AI often includes red-and-white striped things whenever I put “theme park” in the prompt (whether I mention Monkey Island or not). The video has red-and-white striped elements in the two main outside scenes, so I’m assuming “theme park” is part of the prompt for both of them

-The first outside scene has clouds that look like Simpsons clouds, but I guess that’s a common cartoon cloud style

-The first scene is the one that looks the most “realistic”, while the one outside the Voodoo Lady’s shop is more cartoony (the house’s roof is all curved/angled, etc.). The realistic nature of the first scene may be a bit of a red herring?

-The scene with the map on the bulletin board is very similar to the MI2 Big Whoop scene, as shown below…

MI2 ending - note the Ferris wheel, shape of the lights, bunting, and foliage
[image: MI2endtopleft]

Similar in MI2 special edition
[image: MI2SEendtopleft]

And in the video, that collection of things is present, also in the top left.


This is all very interesting, but I feel weird about using AI directly. It’s a great tool for kick-starting your creativity, but I don’t know if its output can be used as-is.

My friend Vance from StandOffSoftware was making a series of videos where he tries to code an adventure game using ChatGPT for the puzzles and AI-generated images for the art.

The results are… questionable, but still it’s a nice experiment. I don’t know if he plans to carry on.


This is interesting! Considering that he was using ChatGPT to get “canned responses” without any training or instruction, I would say that the results are not extremely bad.

If I interpreted what you mean by “directly” correctly, I think a good next step for him would be to approach GPT as you said: more as an assistant than as the sole creative “entity” of the project.

That’s also what GPT was designed for: as something that needs to be “educated” before it can provide anything good in return.
