(Note: I'm relatively new to Blender, so I may be missing some very obvious points!)
I'm looking at setting up a small internal farm that might be used in a classroom situation.
We have a couple of Windows PCs and a couple of Linux VM servers that could be utilized. Students will use a mix of Mac, Windows, and Linux frontends; possibly even Raspberry Pis...
Given the Docker containers, it looks really simple to set up CPU- (possibly even GPU-) accelerated containers on Linux... and that's also where I have the most experience.
Are there any instructions or wrappers for setting up Blender + CR as a Windows service? The machines are often used for other, less taxing tasks, so Blender needs to run out of (any) sight, in the background.
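In lieu of a ready-made wrapper, here's a minimal sketch of the kind of launcher a service tool (NSSM, Task Scheduler, etc.) could run. The blender.exe path is an assumption for a default Windows install; adjust it for yours. The flags used (`-b`, `-o`, `-s`, `-e`, `-a`) are Blender's standard command-line render options.

```python
# Hypothetical launcher sketch for headless (background) Blender rendering.
# BLENDER_EXE is an assumed install path -- change it to match your machine.
import subprocess

BLENDER_EXE = r"C:\Program Files\Blender Foundation\Blender\blender.exe"

def build_headless_command(blend_file, output_pattern, start=1, end=1):
    """Return the argv list for rendering frames start..end with no GUI."""
    return [
        BLENDER_EXE,
        "-b", blend_file,      # -b / --background: run without the UI
        "-o", output_pattern,  # output pattern, e.g. //render/frame_####
        "-s", str(start),      # start frame (must come before -a)
        "-e", str(end),        # end frame (must come before -a)
        "-a",                  # render the animation over that frame range
    ]

# To actually launch from a service wrapper, uncomment:
# subprocess.run(build_headless_command("scene.blend", "//render/frame_####", 1, 100))
```

A service manager would then just run this script (or the command it builds) under the dedicated render user, restarting it on failure.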
In terms of file access and permissions, I'm planning on setting up limited-access "CrowdRender" users that have read-only access to a network share, with the share auto-mounted on Windows under this specific user. The same user would run the Blender + CR headless service.
For render nodes, is read-only access enough?
And if users save their blend file on this network share, and structure their textures and other external assets in subdirectories relative to the blend file, should this work, even cross-OS?
Or does the blend file need to be set to "automatically pack into blend" (or is that just a lot easier)?
If relative paths don't work, is there a path-mapping feature, e.g. a way to use regex to map "x:\shared\assets\..." to "/mnt/shared/assets/..."?
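To illustrate what I mean: I don't know of such a feature in CR, but the mapping itself is trivial, something like this sketch. The share names and mount points are invented examples.

```python
# Hedged sketch of a path-mapping shim (not a real CR feature, as far as
# I know): a table of regex rules translating Windows share paths to the
# equivalent POSIX mount points. Paths here are made-up examples.
import re

# (windows-pattern, posix-replacement) pairs, checked in order
MAPPINGS = [
    (re.compile(r"^[Xx]:\\shared\\"), "/mnt/shared/"),
]

def map_path(win_path: str) -> str:
    """Translate a Windows path to its POSIX equivalent, if a rule matches."""
    for pattern, replacement in MAPPINGS:
        if pattern.match(win_path):
            mapped = pattern.sub(replacement, win_path, count=1)
            return mapped.replace("\\", "/")  # flip remaining separators
    return win_path  # no rule matched; leave unchanged

# map_path(r"x:\shared\assets\chair\bump.exr")
#   -> "/mnt/shared/assets/chair/bump.exr"
```

Each render node would carry its own rule table, so one blend file could resolve assets correctly on Windows and Linux nodes alike.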
(We discovered the LilySurfaceScraper add-on for fetching and setting up textures. It's awesome, but relative paths to textures seem to be broken either in the add-on or in Blender: the materials all went purple when we moved the directory containing the project (and all assets) to a network share. I just discovered the `File > External Data` menu; I'll have to play with that later and see if it fixes things.)
Thanks in advance; I'm just stunned at what even an absolute beginner like me can achieve, especially with the new 2.8 GUI... but now I want it faster ;-)
Honestly, I'd use it as an interim solution if it meant other core/key parts reached stability faster.
I'd try it, because in our small lab setting with lower-end machines, I could see running two instances of Blender on each student's machine: one for the user sitting in front of the machine, and one for the pool. Plus instances on some other machines dotted around the workshop. In my experience so far, students rarely end up working on the same task at the same time; there's lots of staring at the screen, looking up YouTube videos, etc. So it could work out as a nice accelerator to get ~100-frame tests done faster. Rapid retry loops are great when learning!
In a limited class for kids or beginner adults, they're not going to push the machines' RAM etc.; it's all about getting the render times down.
Because I'm an old-fashioned computer nut, I sort of enjoy proving technology can do X / using it to its fullest for its longest life. (I grew up across the ditch from where I assume you are; I wrote BASIC code on graph paper and simulated a computer executing it on the opposite leaf for a year or so before I got to see a real ZX81/BBC/VIC-20/DSE computer in my rural NZ hometown.)
But in the long term, I'd really like to see a way to structure a project so that it's properly portable across OSs, and to be able to provide mappings (at OS environment, or in Blender) to be able to find ALL the assets.
I'm still convinced some sort of variable expansion, with all paths being URIs, would be the right way to go, i.e. a list of user-definable variables that can be used in all asset paths. "//" might mean "relative to the blend file". Set an ASSET_BASE variable properly and use "${ASSET_BASE}/textures/chair/bump.exr", and so long as you copy the right files to your local drive, you can work from home on any OS. Switch it back to the office network NAS root for assets and carry on at work (on a different-OS workstation). And while I'm dreaming, these expansion variables should be stored in such a way that you can have environments, either as separate files from the blend file or as an array inside the blend file. I'd prefer a separate file, 'cause then it's super easy to share it with a team, or to share updates (e.g. if shares change name, just update this file...).
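A toy version of the idea above, using Python's `string.Template` for the `${VAR}` expansion. The environments here are plain dicts that could be loaded from a small sidecar file next to the .blend; ASSET_BASE and the paths are invented examples, not anything Blender actually supports today.

```python
# Sketch of per-site "environments" for asset-path expansion.
# The variable names and paths are illustrative assumptions.
from string import Template

# Per-site environment: easy to share, or to swap home vs. office
office_env = {"ASSET_BASE": "/mnt/nas/projects/chair"}
home_env   = {"ASSET_BASE": "C:/work/chair"}

def expand(asset_path: str, env: dict) -> str:
    """Expand ${VAR} references in an asset path against an environment."""
    return Template(asset_path).substitute(env)

path = "${ASSET_BASE}/textures/chair/bump.exr"
# expand(path, office_env) -> "/mnt/nas/projects/chair/textures/chair/bump.exr"
# expand(path, home_env)   -> "C:/work/chair/textures/chair/bump.exr"
```

The same blend file then resolves its assets correctly on both machines; only the small environment file differs between sites.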
Then Blender/CR would work the same as now for single users, but be really easy to grow/stretch/tweak as needed.
(I'm basing this on how I worked with systems for developing, compiling, and testing code across different developer machines, cross-compiled to multiple targets.)
I've been working with Meshroom for some hobby (and hopefully paying) jobs. I just got the latest trunk compiling on Windows. It works via a pipeline: you have access to each step. I'm only just starting to get my head around it. It's as simple as Zephyr3D or Metashape in the default case, but the amount of control it gives you if you want to expand or experiment is astounding. There are hints that it can be set up to use a render farm... take a look at this demo/tutorial: https://youtu.be/1dhdEmGLZhY and this one: https://www.youtube.com/watch?v=BwwaT2scoP0 (skip the first few minutes; they're installation details.)
My mind boggles at what creative people could do if the setup of sharing files across pipelines became trivial.
e.g. you and your mates have a couple of desktops and laptops. You set up one of the desktops as the flat "server". You all agree to run Docker, and use Kubernetes/Rancher/any one of the container composition tools. You download a pre-made basic config (which uses provided Docker containers for Blender, Meshroom + FireWorks, a FireWorks coordinator, etc.) and you have a simple-to-deploy DIY small-scale photogrammetry and render farm(!)
If we could break the barrier between the "friendly" OSes (where people want to do interactive work) and the "server" OSes (which have the really good tools for setting up compute grids) for this sort of work, I think it'd open up some real opportunities for aspiring creatives.
Sorry, I'm going on and on, I just really like simple standards that are flexible and not tied to only one set of OSes/tools/implementations :-)