I am trying to connect to more than one CR container. All the containers appear to start up correctly: Blender is running in each container and TCP/9006 is listening. When I connect to the first container, CR shows it as syncing. When I go to connect a second container, it will not connect. TCP/9006 is listening on it, and I noticed in a packet dump that when the connection attempt is made, a second instance of Blender is launched inside the container. This only happens on the n+1 containers. Even when I switch which container I connect to first, CR connects that first node, but the second and subsequent ones will not connect, and another Blender is launched in those containers.
Containers: Ubuntu 18.04
Blender: 2.83.4
CR: 0.2.3_bl280

Client: Windows 10 Pro
Blender: 2.83.4
CR: 0.2.3_bl280
Hi @Wolvenmoon Z :), yes please to those helm files. Always keen to learn something new, so looking forward to seeing what you have been working on and learning how they work to boot.
Ok, with respect to your suggestions, all very welcome and great feedback. Makes our lives somewhat easier to have real feedback from users as it stops us from wandering far from the path of good design.
Sounds like a good idea on the face of it. We're planning on replacing the current connection system though, which would hopefully either eliminate the manual connection step, or move it into an advanced mode for circumstances where a manual setup approach is required.
Another part of this redesign would mean that clients would be listening, so you'd only need to configure a routable path to the client machine. Also, since in your case the nodes would effectively be making requests from behind a NAT, you would likely not have to configure anything, since NATs are designed to map outgoing requests to incoming replies without any intervention from the user.
We tested this and it worked. I was lucky enough to have someone in Switzerland set up a public IP that forwarded to their render server; they set up DNS, and I was able to enter a URL into Crowdrender as the name of the render node, and it connected and rendered just fine. Of course, the tricky part was setting up the correct port forwarding: we use ports 9000 - 9025, so those ports needed forwarding.
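If you're running the render node in a container, a minimal sketch of publishing that port range might look like the following. This is illustrative only, not an official configuration: the compose layout and service name are assumptions, and the image tag is the community zocker160/blender-crowdrender image mentioned elsewhere in this thread.

```yaml
# Illustrative sketch only: publish the Crowdrender port range (9000-9025)
# from a containerised render node. Service name and compose layout are
# assumptions, not an official setup.
version: "3.8"
services:
  crowdrender-node:
    image: zocker160/blender-crowdrender:bl_2.83_cu_10
    ports:
      - "9000-9025:9000-9025"   # forward the full CR port range to the container
```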
In the future these issues should mostly be taken care of, and we'll be carefully considering whether we even need an advanced mode where you can control how things work with regard to the network.
If you have ideas, comments or even criticism, would love to hear them :)
James
@James Crowther @zocker1600
I'll get my helm charts cleaned up and posted! I did a lot of modifications within my Rancher web interface, so the yaml files I started with aren't up-to-date! They should be ready in at most a week or so! :)
The update to the Docker repo no longer allows me to connect to my nodes directly by adding them manually and entering their IP addresses. When I log in to Crowdrender it populates the list properly and shows them connected, but it displays my public IP, which does not have ports forwarded to the nodes.
I entered the right IP addresses and told it to connect. It attempted to repopulate with my public IP, but the nodes did connect, and all three now show as synced and participate in the render.
I did notice a lot of interface delay. I'm not certain whether it's a problem on my end or with the software, but hitting 'connect' has a long pause before it opens the box for entering values.
I have a few thoughts that you are absolutely free to cherry-pick from.
1. Handling of private and public IP addresses. Track both via the cloud and let users tick a "Use LAN IP" or "Use custom IP" box in Blender.
2. Allow manual entry of port ranges per node in Blender to allow for NAT. For example, my network is on 192.168.1.0/24 but my render nodes are on 172.24.0.0/16 behind a router at 192.168.50.10. Being able to set "node1"@192.168.50.222:9000-9025 and "node2"@192.168.50.222:9026-9050 within the interface would be greatly beneficial (see the sketch after this list).
3. Host-name resolution could help with suggestion 1, since, say, my node mirth(dot)wolvenmoon(dot)net resolves to 172.24.15.3 on the 172.24.0.0/16 network but to 192.168.50.222 on the 192.168.1.0/24 network, and, if I ever were to open it up to the Internet, would resolve to my public IP there.
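To make suggestion 2 a bit more concrete, here is a purely hypothetical sketch of what a per-node address and port-range mapping could look like if it were exposed as a config file. None of these keys exist in Crowdrender today; the names, addresses, and ranges are just the example values from the list above.

```yaml
# Hypothetical per-node NAT mapping -- illustrative only, not an existing
# Crowdrender configuration format.
nodes:
  - name: node1
    address: 192.168.50.222   # NAT/router address reachable from the client LAN
    port_range: "9000-9025"   # ports forwarded through the NAT to this node
  - name: node2
    address: 192.168.50.222
    port_range: "9026-9050"
```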
@James Crowther @zocker1600 I had this running as a DaemonSet (one container per machine) binding to host ports on three different machines via Kubernetes, so, for example, the IP addresses and port ranges bound were 172.24.15.2:9000-9025, 172.24.15.3:9000-9025, and 172.24.15.4:9000-9025. If each node is getting a unique ID now, I don't anticipate any further connectivity issues. I'll try to test later this week and, if I run into an issue, either reply here or re-open the issue on GitHub. Would you be interested in Helm charts once I verify mine are working? Thank you very much for your time!
Was this ever solved? I'm seeing what appears to be similar behavior with the zocker160/blender-crowdrender:bl_2.83_cu_10 container launched as a Kubernetes DaemonSet with host ports 9000-9025.
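For reference, the DaemonSet in question is roughly shaped like the sketch below. This is an abbreviated, illustrative manifest rather than the exact chart output: the names and labels are assumptions, and only the first two of the 9000-9025 hostPort entries are shown.

```yaml
# Abbreviated, illustrative DaemonSet sketch -- names and labels are
# assumptions; only two of the 9000-9025 hostPort entries are shown.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: crowdrender-node
spec:
  selector:
    matchLabels:
      app: crowdrender-node
  template:
    metadata:
      labels:
        app: crowdrender-node
    spec:
      containers:
        - name: crowdrender
          image: zocker160/blender-crowdrender:bl_2.83_cu_10
          ports:
            - containerPort: 9000
              hostPort: 9000
            - containerPort: 9001
              hostPort: 9001
            # ...and so on for each port up to 9025
```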
Thanks Matt, I can see your ticket, will be in touch soon :)
Will do thank you.
Hi Matt, any chance you can send us the logs from those machines? Best way would be to open a support ticket where you can upload them -> https://www.crowd-render.com/report-a-problem
:)