Animated Podcasts

I have this crazy idea. Now that we have a method for creating transcripts of the podcasts, I think we have enough raw data to pull this off. @Sushi has taken the autogenerated transcripts for the first five podcasts, identified the speakers, cleaned up the text, and added references (nice work).

For the video versions of the podcasts, I was thinking we could create animated “heads” of the speakers, with mouth movements. We can use the avatars for Ron, Gary, Dave, and the other devs; we just need to create the eight or so images for the different mouth positions, and use the Rhubarb Lip Sync tool from @DanielWolf to extract the timing data and which mouth position to use.
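To sketch how this could work: Rhubarb can export its results as JSON (something like `rhubarb -f json -o mouth.json episode.wav`), with a `mouthCues` list of timed mouth-shape letters. The exact cue values below are made-up sample data, but a lookup over real output might look like this:

```python
import bisect
import json

# Rhubarb's JSON export contains a "mouthCues" list of {start, end, value}
# entries, where "value" is a mouth-shape letter (A-F, plus extended G, H, X).
# SAMPLE is invented demo data standing in for a real export.
SAMPLE = """
{
  "mouthCues": [
    {"start": 0.00, "end": 0.25, "value": "X"},
    {"start": 0.25, "end": 0.40, "value": "B"},
    {"start": 0.40, "end": 0.75, "value": "C"}
  ]
}
"""

def load_cues(text):
    """Return (start_times, shapes) parallel lists for fast time lookup."""
    cues = json.loads(text)["mouthCues"]
    starts = [c["start"] for c in cues]
    shapes = [c["value"] for c in cues]
    return starts, shapes

def shape_at(starts, shapes, t):
    """Mouth shape to display at time t (seconds); 'X' (rest) before the first cue."""
    i = bisect.bisect_right(starts, t) - 1
    return shapes[i] if i >= 0 else "X"

starts, shapes = load_cues(SAMPLE)
print(shape_at(starts, shapes, 0.3))  # → B
```

Each video frame would then just look up the shape for the current playback time and blit the matching mouth image.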

Once the final transcripts (which identify when each speaker starts talking) are available, this should in theory be fully automated.

We need some sort of 2D animation engine to put the pieces together, and I’m thinking of trying the Python library pygame as a starting point. The pieces I don’t have readily available yet are the avatar images (I should be able to clip those from the blog) and the mouth images. We may need a few sets of these to match the mouth styles of the different avatars, and to get help from someone who is perhaps better at drawing than I am.

My thought is that we could identify all the speakers for a podcast ahead of time, place them at equally spaced locations on the screen, and then animate their mouths as they speak. I was also thinking we could have the other avatars look in the direction of the current speaker, using a few different eye images. In addition, it would be fairly easy to drop text on the screen when a reference is mentioned in the podcast.
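The layout and gaze logic is simple enough to sketch in plain Python (the coordinates and eye-image names here are just assumptions for illustration):

```python
def speaker_positions(n, screen_width, y=200):
    """Place n avatars at equally spaced x positions across the screen."""
    step = screen_width / (n + 1)
    return [(round(step * (i + 1)), y) for i in range(n)]

def gaze(listener_x, speaker_x):
    """Which eye image a non-speaking avatar should use: look toward the speaker."""
    if listener_x == speaker_x:
        return "front"
    return "right" if speaker_x > listener_x else "left"

pos = speaker_positions(3, 800)
print(pos)                          # [(200, 200), (400, 200), (600, 200)]
print(gaze(pos[0][0], pos[2][0]))   # right
```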

The beauty of this is that it could potentially be fully automated: while it might take a while to get working for the first podcast, the script could then run automatically for the remaining ones.

3 Likes

I like the idea of animating pixelated developers.

I have a suggestion: we could ask Ron for permission to use the game sprite sheets, which contain the avatars of the three (younger) developers as well as the mouth images.

1 Like

That’s a great idea! Viewing those podcasts as an animated video would be really cool! And using Rhubarb for the animation would be delightfully “meta”! :grinning:

2 Likes

I just compared Ron’s avatar image with a screenshot from ThimbleCon. Guess what: They are identical!

Ron's avatar image

Seems like those sly developers simply re-used game art of their younger selves for their current-day avatar images. I never noticed before! So I absolutely second the idea of using their game sprites including mouth drawings.

Let’s hope Ron agrees!

1 Like

Well, it’s either that or poorly redrawn stuff which makes them look like Ren & Stimpy.
“Hi, I am Ren Gilbert and welcome to another TWP podcast”

One issue I foresee is that other people who appear on the podcasts are not in the game and thus have no talking animations. They all have static art, though: Thimbleweed Park™
Except for Boris. Unless the dead-body Boris at the beginning of the game was modeled after his likeness?

Funny idea: rather than just talking heads, it would be more appropriate for a Stand-up meeting to have the complete characters, well, standing up. And if Boris=dead body, you could use the lying face-down position as well for a change.
All under the assumption/condition of @RonGilbert & Terrible Toybox giving their permission to do any of that, of course.

1 Like

:rofl:

We might be able to take the mouth images from one character (e.g. Ron) and use them for the other avatars. Or we could just draw the few mouth images ourselves; there is more than one pixel artist in this community.

I think that the in-game Boris is modeled after a backer’s likeness.

That would imply that we would also need to draw the bodies of several podcast special guests. It’s doable, but a lot more work than just creating mouth images for them.

1 Like

Nope:

This was done when he “was a stretch goal” on Kickstarter.

1 Like

OK - here’s an initial demo of the direction I’m going, using Python with the pygame library and Rhubarb (great tool, @DanielWolf) to extract the lip sync data. For the demo I grabbed some images of Delores from the TWP art book (after all, @RonGilbert used Delores as his Twitter avatar for a long while). I’m hoping Ron will agree to send me the raw sprite sheets for the TWP characters who appear in the podcasts: at least Ron, Gary, and David for now. I’ll PM Ron and hope for the best :slight_smile:

I think this is going to work pretty well. The next step is to parse the speaker data out of the latest YouTube auto-synced transcripts, done by @Sushi, and have two speakers on the screen, alternating lip animations as the speakers change.
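The exact transcript format is still up in the air, so this is only a sketch: assuming one utterance per line in a `[HH:MM:SS] Speaker: text` shape (adjust the regex to whatever the cleaned files actually use), the speaker timeline could be parsed like this:

```python
import re

# Assumed transcript line format: "[HH:MM:SS] Speaker: text" - this is a
# guess, not the real format of the cleaned transcripts.
LINE = re.compile(r"\[(\d+):(\d\d):(\d\d)\]\s*(\w+):\s*(.*)")

def parse_transcript(text):
    """Return a list of (start_seconds, speaker, caption) tuples."""
    out = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if not m:
            continue  # skip blank lines and anything that isn't an utterance
        h, mnt, s, who, words = m.groups()
        out.append((int(h) * 3600 + int(mnt) * 60 + int(s), who, words))
    return out

demo = """\
[00:00:03] Ron: Welcome to the Thimbleweed Park podcast.
[00:00:08] Gary: Hi everybody.
"""
print(parse_transcript(demo))
```

The resulting timeline tells the renderer which avatar's mouth cues to play at any moment.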

4 Likes

Demo 2 is now available. It automatically switches the lip sync to the appropriate speaker and places names and captions under each speaker.

8 Likes

That’s awesome. :grinning:

Putting the captions below each speaker forces you to use a small font, which isn’t readable on a small mobile screen, especially if the device is held in portrait mode.

Captions would be perfectly readable on mobile as well if you could write them with a bigger font.

What about writing them just at the bottom, in a big font, and using a different way to distinguish the speakers? For example, in Thimbleweed Park the speech of each character is written in a different color. We could use the same method.

3 Likes

Yes, this is really awesome!

There are tools to extract the game data (and the sprite sheets) from the game, but I assume you want permission from Ron to create these videos? :slight_smile:

If you get permission…

…please consider adding eye movement to the avatars :stuck_out_tongue: I don’t remember if they ever blink in the game, but Delores’s fixed stare was a bit uncanny :smiley:

1 Like

Good news: Ron has provided the sprite sheets, and with the help of the Sprite Splitter tool I now have the six front-facing mouth-position images for Ron, Gary, and David. Within a few days (as time allows in the evenings), I should have a version of the script that works with the many podcasts featuring these three speakers. I may use a “placeholder” for other speakers for now, so the script doesn’t break if it encounters them in a transcribed podcast.
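The placeholder idea is just a dictionary lookup with a fallback; the sprite-set names here are invented for illustration:

```python
# Hypothetical sprite-set names - only Ron, Gary, and David have real sheets.
KNOWN = {"Ron": "ron_mouths", "Gary": "gary_mouths", "David": "david_mouths"}
PLACEHOLDER = "generic_mouths"

def sprites_for(speaker):
    """Return the sprite set for a speaker, falling back to a generic one."""
    return KNOWN.get(speaker, PLACEHOLDER)

print(sprites_for("Ron"))        # ron_mouths
print(sprites_for("SomeGuest"))  # generic_mouths
```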

I also intend to fix the small caption text issue in the next version, with a different screen layout.

5 Likes

Yes, that’s my plan. I was at least going to have the non-speaking avatars look in the direction of the speaking one. I could also experiment with “idle animations” or “blinking”, but I’m not sure how that would work; I would probably need a special image with closed eyes. For the eye direction I can just draw over the eyes in the image with black and white pixels. These enhancements may take some time to implement, and the next version may not have any special eye animations, so prepare for frozen stares from the devs…
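Since pixel-art eyes only need whole-pixel nudges, the “draw over the eye” approach could be as simple as shifting the pupil one pixel toward the current speaker (coordinates here are made up for the example):

```python
def pupil_offset(eye_x, eye_y, target_x, target_y, max_shift=1):
    """Offset (in pixels) to draw the pupil toward the target position.

    Takes only the sign of each axis difference, so the pupil moves at
    most max_shift pixels in each direction - enough for pixel art.
    """
    def sign(d):
        return (d > 0) - (d < 0)
    return (sign(target_x - eye_x) * max_shift,
            sign(target_y - eye_y) * max_shift)

print(pupil_offset(100, 50, 300, 50))  # (1, 0): look right
print(pupil_offset(100, 50, 20, 80))   # (-1, 1): look down-left
```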

We’re used to those…

RonEvilEye

3 Likes

Thanks Ron, it will be even funnier now!

I’m bored…

3 Likes

You surely meant STARE wars.

3 Likes

I just happened upon the source of that image! (and backed)

1 Like

Was re-naming it “The Last Game Dev” too subtle? :frowning:

1 Like