Okay, my experience using a greenscreen (with the same intention as I think you have) was that I could technically make it work. In the studio it was no problem (~infinite setup time, and once it is set it stays, with the lights always under control), but on stages it was sometimes a challenge. I'll attach some photos, they explain themselves I think. If this is something you'd like to try I can share more on the exact details and challenges; here is the general stuff I wish I'd known at the start:
(This is for an indie/small-scale electronica/art project, performed solo/duo/trio, in small venues, on small budgets, with some minor government funding to help with gear investments and crew fees, but the only really expensive things are the powerful laptops for VDMX.)
I travelled with a simple telescopic screen stand and green fabric covering 2x3 meters behind me and 1 meter of floor, gaffed down. It used three white lighting spots: two on the sides (one each) to keep the green balanced, and one in front to light the subject (me). I also added some USB lights at the instrument station to help the other cameras pointed at my hands/details that were also filmed; these lights coincidentally helped with face/subject lighting too. I then had an "instrument zone" and a "full body" zone, so I could either perform playing, or perform "acting" for fully keyed-out insertion. I used USB webcams and NDI network cams, and got pretty good latencies. The worst delay usually came from the venue's HDMI repeater or processing between my output and the projection.
Often in venues we could get assisted light from the rig/truss and reduce the number of spots needed, but that meant the lighting person had to remember the cues for when to add it. They sometimes forgot, so it was more of a problem than a help; I preferred controlling the lights myself. If you always travel with your full crew this could work very well. I can't afford that yet.
If I (or my crew) had the full stage/venue plus enough time, we could balance the lighting and keying okay. Over time I got good at rigging and knowing which settings to adjust, and could set up the greenscreen quickly and make it work in challenging environments, but NOT always. I then learned to kind of incorporate "bad keying" into my aesthetic. I think this knowledge only comes through experience: you need to set it up many times before you know what works where, and you also adjust the visuals and your performance to fit within the parameters available. It will never be perfect (but sometimes it will!). It was also a challenge to run regular concert lighting alongside the keying; in one of the example photos you can see the keying glitch out (which was okay for MY aesthetic). But here too, my lighting designer and I got good at balancing for each other over time: we ended up doing some parts of the show optimized for keying, and some parts optimized for lighting (I turned off the greenscreen lights for those parts).
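If it helps to see what the "settings I had to adjust" actually boil down to: the core of a chroma key is basically one number, how "green" a pixel must be before it drops out, and uneven venue lighting is exactly what keeps shifting that number. A toy NumPy sketch of the idea (purely illustrative, not my VDMX settings; the function name and thresholds are made up):

```python
# Toy chroma-key sketch: the threshold/softness pair is what you end up
# re-tuning in every venue, because lighting changes the "greenness".
import numpy as np

def green_key_alpha(frame, threshold=80.0, softness=40.0):
    """frame: HxWx3 float RGB in 0..255. Returns per-pixel alpha in 0..1,
    where 0 means "this looks like the green screen, drop it"."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    # "greenness": how much the green channel dominates red and blue
    greenness = g - np.maximum(r, b)
    # alpha ramps down to 0 as greenness approaches the threshold;
    # softness controls how gradual that edge is (hard vs soft key)
    return np.clip((threshold - greenness) / softness, 0.0, 1.0)
```

The reason it never "just works" on stage is that stage light spilling onto the screen (or green spill onto the performer) pushes pixels across that threshold, so you chase it with the two knobs per venue.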
I totally get your concern about how ugly it is. It IS ugly! And so loud, visually extremely arresting, especially for the audience. The projected visuals (the result) need to be both much larger and very alluring and entertaining, to draw the audience away from the green visual siren. For me, I incorporated this aesthetic deliberately in my work and never tried to hide it. The concert/performance deals with media manipulation, digital/screen-based life, and the issues we face when communicating in multiple realities, so artistically and conceptually it made sense to show the "real" part next to the "fake" part, and to use this to maximum effect. Audience testing and my documentation show that people generally watch the projected image, but "reference" the performer zone now and then to verify that it is happening live. So I do know that people quickly forget about the greenscreen as a "thing".
Eventually I got a bit tired of both the complex rigging/keying/tuning process and of the aesthetic itself, and I don't think I can use this trick for every tour; it's kind of a once-in-a-while effect. (Or I would have to embrace it and keep it forever, and I'm not that kind of artist.) For the current tour I dropped the chroma keying completely, placed myself/the musicians into the projection itself, and masked us out with dumb soft masks, so it looks like we are already inside. Hard to explain, but this actually works just as well (for my aesthetic and intentions right now). And it is a LOT less rigging :)
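To make the "dumb soft masks" a bit more concrete: I just mean a fixed, feathered alpha shape where we stand, blending the camera over the visuals, with no color analysis at all. Something like this sketch (illustrative NumPy, not the actual patch; the function names are mine):

```python
# "Dumb soft mask" compositing sketch: a fixed vertical band with
# feathered edges where the performer stands, no keying involved.
import numpy as np

def soft_window_mask(width, height, center_x, half_width, feather):
    """Returns an HxW mask: 1.0 inside a vertical band around center_x,
    fading to 0.0 over `feather` pixels at the edges."""
    x = np.arange(width, dtype=float)
    dist = np.abs(x - center_x) - half_width  # distance past band edge
    band = np.clip(1.0 - dist / feather, 0.0, 1.0)
    return np.tile(band, (height, 1))

def composite(camera, visuals, mask):
    """Blend the camera frame over the projected visuals using the mask."""
    a = mask[..., None]  # HxW -> HxWx1 so it broadcasts over RGB
    return a * camera + (1.0 - a) * visuals
```

Since the mask never changes per frame, there is nothing to tune per venue, which is most of why the rigging got so much lighter.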
But I'd still really like to nail this, preferably without a green screen in the long term. I'm currently looking into depth tracking and/or IR solutions, but have not gone in any specific direction yet. My biggest concern with this route is latency; I have yet to find acceptable latencies with depth tracking, particularly as it builds up across all the steps needed. But I'm hopeful. I think in a few years the gaming industry in particular will push us towards a platform we artists can benefit from, so I'm keeping an eye on that :) So I keep working, and when the tech is ready, my content will be too ;) I'd be happy to keep sharing experiences and findings, and if you have any questions about how to do the greenscreen at least, if you choose that route, I'll answer as far as I can.
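For what it's worth, the depth route in its simplest form just replaces the color test with a distance test: anything between the sensor and some cutoff counts as "performer". A sketch of that idea (purely illustrative; it assumes a depth frame in millimeters, the kind RealSense-class sensors deliver, and I haven't committed to any hardware):

```python
# Depth-key sketch: key on distance instead of color. The invalid
# (zero) readings these sensors produce at edges are one source of
# the noisy silhouettes, on top of the per-step latency buildup.
import numpy as np

def depth_key_alpha(depth_mm, near_mm=500, far_mm=2500, invalid=0):
    """alpha=1.0 for pixels inside the performer zone (near..far),
    0.0 elsewhere. Pixels with no depth reading also drop out."""
    valid = depth_mm != invalid
    in_zone = (depth_mm >= near_mm) & (depth_mm <= far_mm)
    return (valid & in_zone).astype(float)
```

The appeal is obvious: no green fabric and no lighting balance. The catch, for now, is that sensor capture, depth processing, and compositing each add their own frames of delay.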