I think the podcast would look weird if it were mainly animated from a side view. It would be better to keep the speakers talking to ‘you’ rather than to ‘each other’.
But if we wanted to try this, it would not be too complicated to run an experiment, since the images can be rotated from within the 2D ‘pygame’ library I am using in Python for this project. I think I will bring in the side views: having the actors turn toward the active speaker might be a nice improvement at least, and it would be a good setup for trying side-view speaking animations as well.
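As a rough sketch of the “turn toward the active speaker” idea (function and variable names are illustrative, not from the actual project): the decision logic is just a comparison of x-positions, and in pygame the mirror itself is a one-liner, `pygame.transform.flip(image, True, False)`.

```python
def facing_directions(xs, active):
    """For each sprite x-position, return 'left', 'right', or 'front'.

    Sprites to the left of the active speaker face right, sprites to
    the right face left, and the active speaker keeps the front view.
    """
    out = []
    for i, x in enumerate(xs):
        if i == active:
            out.append("front")
        elif x < xs[active]:
            out.append("right")
        else:
            out.append("left")
    return out

# Three speakers at x = 100, 300, 500; the middle one is talking.
print(facing_directions([100, 300, 500], 1))  # ['right', 'front', 'left']
```

Each frame you would then blit either the front-view sprite or the (possibly flipped) side-view sprite, depending on the returned direction.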
Nodding could be somewhat simulated by ‘bobbing’ the front face view, if I put the tops of the bodies in there too, which is another possibility… (all this might take a while before I can show a demo, though).
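A minimal sketch of how the ‘bobbing’ could work, assuming a simple sine-based vertical offset (amplitude and period values here are hypothetical): compute an offset each frame and blit the head sprite at `(x, y + offset)` while the character is speaking.

```python
import math

def bob_offset(t, speaking, amplitude=4, period=0.6):
    """Vertical pixel offset for a 'nodding' bob (illustrative values).

    Returns 0 while the character is silent; otherwise a smooth sine
    bob based on elapsed time t in seconds.
    """
    if not speaking:
        return 0
    return round(amplitude * math.sin(2 * math.pi * t / period))

print(bob_offset(0.15, True))   # peak of the bob: 4
print(bob_offset(0.30, False))  # silent, no movement: 0
```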
Yes, that is my priority. But I welcome ideas; I think I can (mostly) avoid unnecessary distraction - unless something sounds really fun. I want to get feedback while people are thinking about it.
It would be nice if there were a background screen that made it look as if they sat together on a couch in front of a C64, or at a regulars’ table in the S&D Diner.
Good and easy idea.
I have a much harder idea.
To make the whole thing less boring, the voices could sometimes go off stage to show clips related to what the devs are saying. For example, the voiceover of them talking about animations could be matched with the nice stop-motion video by Octavi… we have a lot of material from the dev blog, and a lot of other material could come from actual gameplay. But obviously this implies a lot of manual work.
What you are suggesting is basically a full documentary. That would indeed take an awful lot of time. Finding a few good references to add to the text already takes me fifteen minutes to half an hour, on average, per ten-minute podcast.
If you find the video ‘boring’, you can also read the text-only versions I post here on the blog more quickly. They have the advantage of being searchable.
I think the video should be like listening to the original sound-only podcast but with some aids to better understand what they’re saying, in terms of both form and content.
1. Put the text ABOVE the heads (easier to read the text while seeing the mouths moving, and closer to how it is typically done in the games as well).
2. If possible, color the text per speaker (you could also color their names below correspondingly).
3. If you can implement point 2, you can get rid of the line connecting the sprites with the text.
4. As an alternative to 3, you could go full comic-style text balloons. But that might open another can of worms in finding the right balloon shapes and sizes to use.
Thanks, @Sushi, for those suggestions. I had some similar thoughts on where I wanted to go next with the captions, and it really helps to know you were thinking along the same lines. I am concerned that color coding alone isn’t clear enough: the brain has to associate the colored text with the speaker name, and that’s going to take some time for people to get used to (at least it would for me). But the current ‘line’ solution, while functional, looks awkward, and I was considering the ‘text balloon’ concept from your suggestion. I have been thinking about how I might code that, and I have an idea I want to try. It might be good to have a combination of the color match (for those who like it) and a text balloon as well; I may be able to fold that into the next version. I like the idea of the text on top too, and it would go well with the text balloon format. Plus, I could keep the text closer to the speakers, so your eye doesn’t have to wander back and forth so far between the speaker and the text.
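The balloon idea could be prototyped with some simple geometry (all names and paddings below are illustrative, not settled choices): place a padded box above the speaker’s head and a small triangular tail pointing down at it. In pygame the body would be `pygame.draw.rect` with a `border_radius`, and the tail `pygame.draw.polygon`.

```python
def balloon_geometry(text_w, text_h, anchor_x, anchor_y, pad=8, tail_h=12):
    """Return (balloon_rect, tail_triangle) placed above a speaker anchor.

    balloon_rect is (x, y, w, h); tail_triangle is three (x, y) points
    pointing down at the speaker's head at (anchor_x, anchor_y).
    """
    w, h = text_w + 2 * pad, text_h + 2 * pad
    x = anchor_x - w // 2          # center the balloon over the speaker
    y = anchor_y - tail_h - h      # leave room for the tail below the box
    tail = [(anchor_x - 6, y + h), (anchor_x + 6, y + h), (anchor_x, anchor_y)]
    return (x, y, w, h), tail

# A 120x20 pixel rendered caption for a speaker whose head is at (200, 150).
rect, tail = balloon_geometry(120, 20, 200, 150)
print(rect)  # (132, 102, 136, 36)
```

The same geometry works whether the text inside is white or color-coded per speaker.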
This might sound like a very dumb question, but why do we need to write the text onto the video in the first place, if people can just switch on YouTube captions (and I mean Sushi’s augmented captions)?
It’s far from dumb. I considered just using the existing captions, but since I already needed to load the timed captions to get the speaker-change info, I had them available in the program anyway, and I could take advantage of that to display them in a more interesting fashion.
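To illustrate how the speaker-change info falls out of timed captions once they are loaded (I don’t know the exact caption format used here, so this assumes a hypothetical `start|end|SPEAKER|text` line format purely for the sketch):

```python
def parse_captions(lines):
    """Parse hypothetical 'start|end|SPEAKER|text' caption lines."""
    cues = []
    for line in lines:
        start, end, speaker, text = line.split("|", 3)
        cues.append((float(start), float(end), speaker, text))
    return cues

def speaker_changes(cues):
    """Return (time, speaker) pairs at each change of active speaker."""
    changes, prev = [], None
    for start, _end, speaker, _text in cues:
        if speaker != prev:
            changes.append((start, speaker))
            prev = speaker
    return changes

cues = parse_captions([
    "0.0|2.5|RON|Hello.",
    "2.5|4.0|RON|Welcome back.",
    "4.0|6.0|GARY|Thanks.",
])
print(speaker_changes(cues))  # [(0.0, 'RON'), (4.0, 'GARY')]
```

With the cues in memory, the same data drives both the mouth animation (who is active) and whatever caption rendering is chosen.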
Would it be difficult to add a background image, and bodies with simple hand-gesture animations that would emphasize the current speaker? If it wouldn’t be too difficult, I could create a background and some animation frames for you to include in the video.
Using a background image is easy, but it runs the risk of cluttering the screen. We could experiment with this, but I’ve been holding off while I work on the base functionality.
Still body images are also easy; in fact, I have these from the sprite sheets. There are maybe a couple of front-facing sprites, and some side-facing sprites for walking. Making them animate in a meaningful way, controlling the timing and transitions of the animation, is much more challenging, and I’m really not ready to go there at this point, since I know it would be a major iterative effort.
To consider it for a moment… it would involve picking the multiple images for the animation and deciding on a cycling schedule and trigger. What sort of animation did you have in mind to emphasize the current speaker? If arm movements are involved, we would want to make sure there was some randomization (repetitive arm movements would be pretty annoying), but not so random as to produce, let’s call it, the “Stan” effect. I could see that turning into a complex project.
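The “randomized but not too random” timing could be sketched like this (a rough idea, with illustrative gap values): schedule each gesture a random interval after the previous one, clamped between a minimum and a maximum, so the arms neither flail constantly nor twitch erratically.

```python
import random

def gesture_times(duration, min_gap=3.0, max_gap=8.0, seed=None):
    """Times (seconds) at which the active speaker gestures.

    Each gesture follows the previous one after a random gap drawn
    uniformly from [min_gap, max_gap]; a seed makes the run repeatable.
    """
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.uniform(min_gap, max_gap)
        if t >= duration:
            return times
        times.append(t)

# Gestures over a 60-second speaking turn; every gap stays within bounds.
times = gesture_times(60.0, seed=1)
gaps = [b - a for a, b in zip([0.0] + times, times)]
print(all(3.0 <= g <= 8.0 for g in gaps))  # True
```

The main loop would then start one gesture cycle (a few frames of arm movement) whenever the current playback time passes the next scheduled value.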
Nice work. Maybe put them inside the radio station booth? I would crop it like below. I would also get rid of the pointing line on the titles and make the font white, like a more traditional movie subtitle.
This is a super nice idea, graphically speaking, but it could lead to the mistaken impression that the three devs were in the same place while recording the podcasts and working on TWP, whereas one of the most impressive things about the development is that they worked mainly from three different locations: Seattle, Felton, and San Francisco, if I’m not wrong.