(Note: I'm relatively new to blender, so may be missing some very obvious points!)
I'm looking at setting up a small internal farm that might be used in a classroom situation.
We have a couple of Windows PCs and a couple of Linux VM servers that could be utilized. Students will use a mix of Mac, Windows, and Linux frontends; possibly even Raspberry Pis...
Given the Docker containers, it looks really simple to set up CPU-accelerated, possibly even GPU-accelerated, containers on Linux... and that's also where I have the most experience.
Are there any instructions/wrappers for setting up Blender + CR as a Windows service? Those machines are often used for other, less taxing tasks, so Blender needs to be running out of (any) sight, in the background.
In terms of file access and permissions, I'm planning on setting up limited-access "CrowdRender" users that have read-only access to a network share, and setting the share to be auto-mounted on Windows under this specific user. The same user would run the Blender + CR headless service.
For render nodes, is read only enough?
And if users save their blend file on this network share, and structure their textures and other external assets to be in sub-directories relative to the blend file, should this work, even cross OS?
Or does the blend file need to be (or is it just a lot easier for it to be) set to "automatically pack into blend"?
If relative paths don't work, is there a path-mapping feature, e.g. a way to use a regex to map "x:\shared\assets\..." to "/mnt/shared/assets/..."?
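A hypothetical sketch of the kind of mapping I mean (purely illustrative, not an existing CR or Blender feature; the mapping table and function name are made up):

# Hypothetical path-mapping sketch -- not an existing CR/Blender feature.
# Each (pattern, replacement) pair rewrites a master-side path into a node-side one.
import re

PATH_MAPPINGS = [
    (r"^[Xx]:[\\/]shared[\\/]assets", "/mnt/shared/assets"),
]

def map_path(path):
    # Rewrite a master-side asset path for this node, normalising slashes.
    for pattern, replacement in PATH_MAPPINGS:
        new_path, count = re.subn(pattern, replacement, path)
        if count:
            return new_path.replace("\\", "/")
    return path  # no mapping matched; leave untouched

print(map_path(r"x:\shared\assets\textures\wood.png"))
# -> /mnt/shared/assets/textures/wood.png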
(We discovered the Lily Surface Scraper for getting and setting up textures. It's awesome, but it seems relative paths to textures are either broken in the plugin, or in Blender... all the textures went purple when we moved the directory containing the project (and all assets) to a network share. Just discovered the `file->external data` menu; will have to play with that later and see if it fixes things.)
Thanks in advance, just stunned at what even an absolute beginner like me can achieve, especially with the new 2.8 GUI... but now want it faster ;-)
Honestly, I'd use it as an interim solution if it meant other core/key parts reached stability faster.
I'd try it, because in our small lab setting with lower-end machines, I could see running two instances of Blender on each student's machine: one for the user sitting in front of the machine, and one for the pool. Plus instances on some other machines dotted around the workshop. In my experience so far, students rarely end up at the same task at the same time - lots of staring at the screen, looking up YouTube videos, etc. So it could work out as a nice accelerator to get ~100-frame tests done faster. Rapid retry loops are great when learning!
In a limited class for kids, or beginner adults, they're not going to push the machines' RAM etc., but it's all about getting the render times down.
Because I'm an old-fashioned computer nut I sort of enjoy proving technology can do X/using it to its fullest/longest life. (I grew up across the ditch from where I assume you are - wrote BASIC code on graph paper and simulated a computer executing it on the opposite leaf for a year or so before I got to see a real ZX81/BBC/Vic20/DSE computer in my rural NZ hometown.)
But in the long term, I'd really like to see a way to structure a project so that it's properly portable across OSs, and to be able to provide mappings (at OS environment, or in Blender) to be able to find ALL the assets.
I'm still convinced some sort of variable expansion, with all paths being URIs, would be the right way to go. That is, have a list of user-definable variables that can be used in all asset paths. "//" might mean "relative to the blend file". Set an ASSET_BASE variable properly and use "${ASSET_BASE}/textures/chair/bump.exr", and so long as you copy the right files to your local drive, you can work from home on any OS. Switch it back to the office network NAS root for assets, and carry on at work (on a different OS workstation). And while I'm dreaming, these expansion variables should be stored in such a way that you can have environments: either as separate files from the blend file, or as an array inside the blend file. I'd prefer a separate file, 'cause then it's super easy to share it with a team, or share updates (e.g. shares change name - just update this file...).
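To make the idea concrete, here's a rough sketch of what I mean by an environment file plus expansion (purely hypothetical - nothing like this exists in Blender today, and the file name and function names are made up):

# Hypothetical sketch of the "environment file + variable expansion" idea.
# e.g. environment.json at home:  {"ASSET_BASE": "D:/local_copy/assets"}
#      environment.json at work:  {"ASSET_BASE": "/mnt/nas/projectX/assets"}
import json
from string import Template

def load_environment(path):
    with open(path) as f:
        return json.load(f)

def expand(asset_path, env):
    # Expand ${VAR} references in an asset path using the current environment.
    return Template(asset_path).substitute(env)

env = {"ASSET_BASE": "/mnt/nas/projectX/assets"}
print(expand("${ASSET_BASE}/textures/chair/bump.exr", env))
# -> /mnt/nas/projectX/assets/textures/chair/bump.exr

Swap the environment file and the same blend file resolves its assets on any OS.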
Then Blender/CR would work the same as now for single users, but be really easy to grow/stretch/tweak as needed.
( I'm basing this on how I worked with systems for developing, compiling, and testing code using different developer machines, cross compiled to multiple targets.)
I've been working with Meshroom for some hobby, and hopefully paying, jobs. Just got the latest trunk compiling on Windows. It works via a pipeline- you have access to each step. I'm only just starting to get my head around it. It's as simple as Zephyr3D or Metashape in the default case, but the amount of control it gives if you want to expand, or experiment, is astounding. There are hints that it can be set up to use a renderfarm... take a look at this demo/tutorial: https://youtu.be/1dhdEmGLZhY And this one https://www.youtube.com/watch?v=BwwaT2scoP0 (skip the last few minutes- first part is installation details.)
My mind boggles at what creative people could do if the setup of sharing files across pipelines became trivial.
e.g. you and your mates have a couple of desktops and laptops. You set up one of the desktops as the flat "server." You all agree to run Docker, and use kubernetes/rancher/any one of the container composition tools. You download a pre-made basic config (which uses provided Docker containers for Blender, Meshroom + FireWorks, a FireWorks coordinator, etc.) and you have a simple-to-deploy, DIY, small-scale photogrammetry and render farm(!)
If we could break the barrier between "friendly" OSes (where people want to do interactive work) and the "server" OSes (that have the really good tools for setting up compute grids) for this sort of work I think it'd potentially open up some opportunities for aspiring creatives.
Sorry, I'm going on and on, I just really like simple standards that are flexible and not tied to only one set of OSes/tools/implementations :-)
Hi James - some news I saw yesterday that might make cross-platform issues easier: https://www.phoronix.com/scan.php?page=news_item&px=Linux-GUI-Apps-GPU-WSL2 - MS has announced they will be supporting GPU-accelerated Linux apps in WSL.
So crazy people like me should be able to run the Linux version of Blender on Windows boxes with hopefully only a small performance penalty. Then all the file path specs will match up.
Interesting times!
This all makes sense, along with a healthy handful of assumptions ;-)
We have a separate FreeNAS based NAS, and for a lab/production, I'd imagine that's the way to think this through.
My 2c would be to leave the setting up of shares etc. outside of CR: IT policies, already-existing shares, assumptions about network/NAS topologies, and so on. Also, a CR service then doesn't need elevated privileges to set this up at the OS level. Not to mention all the testing around taking down shares if things fall over, etc., etc.
Detecting missing files and sending them to the clients is the simplest for the end-user, but my gut tells me that (except for longer animations) the cost of sending the files multiple times across the network is going to outweigh what CR gains.
One possibility might be something like an embedded BitTorrent system: e.g. http://www.bittornado.com looks to be MIT-licensed and in Python. But in some experiments I did years ago to speed up distributing multi-gig source files to a dev site full of developers, the cost of computing the required hashes and distributing them ate most of the block-distribution speed-ups.
Maybe there are two use cases to consider:
- Absolute beginner: CR detects and ships all files to clients. Effective speed-up limited a lot by transfers.
- Advanced/production: CR takes care of (relative) path fixes based on OS, and requires everything to be on a network share.
For the advanced case, could CR just send the nodes the location of the (master) blend file on the network? Then it wouldn't need to sync the blend file at all. I'm assuming blend files (with all assets external) are relatively small... then nodes could load the same file the master has. Or if write locks/file changes are a concern, each node could make a copy of the master blend file, tweak it as necessary (e.g. OS-specific path tweaks), and execute it in the same place as the master.
Again, I'm thinking about leaving the file sync/transfer to something that is hopefully well optimized, and also as flexible as the end user needs. E.g. with this setup, in theory a team could set up a pre-batch rsync job to sync all assets to SSDs local to each node, and then the master coordinates the rendering. Maybe for animation rendering CR could switch to frame-level batching across nodes: optimize for latency (tile-based) for single images, optimize for throughput (frame-based) for animations.
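Just to illustrate the frame-based (throughput) case - a toy sketch of the scheduling idea, not anything CR actually does today, with made-up names:

# Toy sketch of throughput-oriented frame batching for animations (illustrative only).
def batch_frames(first, last, nodes):
    # Split an inclusive frame range into one contiguous chunk per node.
    frames = list(range(first, last + 1))
    chunk = -(-len(frames) // len(nodes))  # ceiling division
    return {node: frames[i * chunk:(i + 1) * chunk]
            for i, node in enumerate(nodes)}

print(batch_frames(1, 100, ["node-a", "node-b", "node-c"]))
# node-a gets frames 1-34, node-b gets 35-68, node-c gets 69-100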
Again, I'm day dreaming with no real experience in the field, just some ideas based on imagination + experiences in other industries (at various points in the past have been a customer support dev, app dev, test dev, devops for Motorola & Nokia; sometimes large teams needing lotsa files fast, and I got to do some experiments outside the standard systems IT had deployed- thankfully with their support :-) )
"Unfortunately" one of my potential helpers has just built a new PC- it's a lot faster than their old machine... so their interest in this is diminished.
But the new-shiny will wear off, and "fast" is a relative term that we soon shift to taking for granted.
I'll try and make some time to play with these ideas in the next few weeks!
Hi Julian,
Hmmm, yeah, I think the info about GPU in headless might have been from before we fixed something. We found that users could get it to work regardless, so long as they had their GPUs enabled on each node in Blender's system settings. Don't know why you wouldn't have your GPU enabled, as you can toggle GPU rendering in the render properties panel, which is easier to get to.
Yeah, the UNC paths are set on the master by linking assets from a shared folder. Works on Windows just fine; Linux and Unix (macOS included) are a no-go unfortunately.
We're working on potential solutions to the issue of external assets, what we're aiming for is a solution that only transfers files to a node if there's no other way, so looking at how relative paths could be used is definitely an option.
There is a slightly better way I have thought of that is not as tedious as manually transferring files or hacking paths. After reading your reply above, I thought about the use of symlinks. Here's what I think you could try.
1. STRUCTURE YOUR PROJECT WELL
On the master you have your project folder, which contains everything: your blend file and an 'assets' folder (you'll want to create one if you don't have it), which contains everything else.
project_folder
|_ your_project.blend
|_ assets/
   |_ textures/
   |_ cache/
   |_ models/ (other blend files)
You may need to modify your project, remapping assets to use the assets folder instead of being, well, anywhere else. Having this structure makes the later steps much easier.
2. SETTING UP THE SHARED FOLDER/DRIVE
If you can setup a shared folder, then we can locate your projects there. This will provide access to the rest of the network to those resources. We'll use relative paths for everything since the project is organised such that the blend file sits in the root directory of the project where everything lives.
Assuming your master is a windows workstation, we'll be setting up a shared folder and this will be shared from windows using the UNC style path, which will look like this
\\HOSTNAME_WORKSTATION\path_to_projects.
Of course you'll need to substitute your actual host name and path.
When you locate your assets like this, if you're modifying an existing project, you'll want to make sure you use relative paths, and move the whole project to the shared folder if it isn't already. You shouldn't have to modify your project much, save for creating an 'assets' folder to put all the stuff that is not packed in the blend file.
3. SETTING UP THE NODES (LINUX)
Since the project will use relative paths, what we need to do is setup a folder that maps to the \\HOSTNAME_WORKSTATION\path_to_projects path. We'll cover the case of linux here.
According to a completely trustworthy guide I got off the internet (recommended you check this out as I am skipping bits here for brevity's sake - https://www.howtogeek.com/176471/how-to-share-files-between-windows-and-linux/), the appropriate way to do this is to first create a directory on your render node, let's call it /home/user_name/windows_master_projects.
Once that folder is created, you should mount the shared volume from Windows onto this folder using mount:
sudo mount.cifs //HOSTNAME_WORKSTATION/path_to_projects /home/user_name/windows_master_projects -o user=your_windows_username
This mounts the share so that the folder contains the contents of your Windows shared path, \\HOSTNAME_WORKSTATION\path_to_projects (note that on the Linux side the share is written with forward slashes).
Now for the fun part. Obviously the path we have mapped to is different from the paths stored in the blend file:
//assets\... is what is in the blend file, which is very different from /home/user_name/windows_master_projects/project_folder/assets,
which is where the files for the project are actually available on the render nodes. This will give you pink bits... not nice.
So we first need to make sure that we provide the assets folder in the same relative position as it is on the master.
To do this we use a symbolic link. This makes a link between the assets folder in
/home/user_name/windows_master_projects/project_folder/assets ...
...and the location of the replica of your master's blend file, which is usually located in a path that looks like this
/home/user_name/cr/server/UUID/
(you'll have to find the right UUID; remember the trick I mentioned about doing a quick test render to change the modified date/time of the UUID folder you're looking for?)
Doing this will make the assets folder appear as if it is inside the /home/user_name/cr/server/UUID folder, and so will replicate the same structure as the master has.
The command to make a link is
ln [-Ffhinsv] source_file [target_file]
so we do
ln -s /home/user_name/windows_master_projects/project_folder/assets /home/user_name/cr/server/UUID/assets
Now effectively the project folder on the render node is
/home/user_name/cr/server/UUID/
containing svr.blend plus an assets folder linked back to the share via the trickery I just described.
Now all that is left is to experiment to see if Blender can handle translating the slash direction. Remember that if your project was on Windows originally, then a texture like mytex.jpg will be in a folder on your master that looks like this to Blender:
//assets\textures\mytex.jpg
Now that we're on the render node, in Linux, the path stored in the blend file is still
//assets\textures\mytex.jpg
so we're relying on Blender here to translate the slashes correctly. I haven't tested that yet. But I am hopeful; after all, I have downloaded the classroom project onto my Windows, macOS, and Linux machines, and it works on each one, using the same downloaded file, and inside that file it refers to
//assets/textures.... etc
so as far as I can tell, it's doing a fine job of dealing with the differences between '/' and '\'.
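If it turns out Blender doesn't translate them, a quick-and-dirty workaround could be to normalise the separators yourself - an untested sketch, run on the node's copy of the file from Blender's Python console, and note it only covers image paths:

# Untested sketch: normalise backslashes to forward slashes in image paths on a *nix node.
import bpy

for image in bpy.data.images:
    if "\\" in image.filepath:
        image.filepath = image.filepath.replace("\\", "/")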
So in summary:
1. structure your project so it has an assets folder
2. place the project in the shared folder - one you can use for any number of projects
3. set up your render nodes with a mapped drive
4. link the assets folder you mapped in step 3 into the crowdrender cr\server\UUID folder of your project.
Phew! That's a lot to take in, but I think this could work much better since the assets are now updatable: if you change textures, models, or simulation caches, they are updated automatically at render time, with no need to manually transfer files to each render node by hand, or to worry about hacking the svr.blend file paths (of which there could be literally hundreds in a big project).
Let me know if you think this is a 'sane' approach. If it's usable, we could even support this in software somehow, maybe by letting you specify the mounted share volume from the crowdrender UI instead of having you go make the symbolic links yourself. Or we could even go one step further and automatically create the share and links for you.
All the best
James
Great news on all fronts! I'd read somewhere that blender running headless can't access the GPU. Most likely old/wrong/apocryphal info.
Our email host is having problems today, so I may not get the emails until tomorrow. Looking forward to trying the newer version, and early access :-).
Tile batching will be very cool, especially if it can be a bit "pathological" and potentially scatter tiles from entire frame-ranges to nodes for animations, and then gather them all back into frames.
I've watched (parts of) that video a couple of times; it's really good, and I appreciate the "uncut/honest" nature of it, but I'm old-fashioned and prefer a text document, especially when it comes to code (snippets). (Am also lazy, and prefer to cut and paste text cf. typing it from a paused video ;-) )
In the video it looks like you're using hardcoded absolute UNC paths for all external assets, which isn't going to work cross platform.
Hence my interest in understanding the use of relative paths... if they even work in blender properly; we did a simple blender only test with the cube & a texture and simply moving it to a new directory... failed (pink textures.)
So given what you've said re the UUID/local copy, even if relative paths are working in Blender, they'll be broken UNLESS the external files are in the same relative location to the temporary blend file on the remote node.
Could you provide a "network base path" (NBP) option on each node?
This would be a start-of-string match that would be removed from the path when the master node sends data, and added on remote nodes?
e.g. Given the blender file is at //share/projects/blender/projectX/projectX.blend on the master, but at //readonly/projectX/projectX.blend on the remote node(s)
and the NBP on the master = //share/projects/blender and the NBP on the node(s) is //readonly, then the master would send the relative base as projectX, and the remote nodes would "reconstitute" this as //readonly/projectX.
Then if remote nodes set their current working directory to //readonly/projectX, relative assets should be accessible...?
And if the NBP is left blank (on all nodes) you have the current behaviour.
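In pseudo-Python, the per-node rewrite I'm imagining is just this (a hypothetical sketch - NBP isn't a real CR setting, and all the names are made up):

# Hypothetical sketch of the "network base path" (NBP) idea -- not a real CR setting.
MASTER_NBP = "//share/projects/blender"   # configured on the master
NODE_NBP = "//readonly"                   # configured on this node

def to_node_path(master_path):
    # Strip the master's NBP prefix and graft on this node's NBP.
    if MASTER_NBP and master_path.startswith(MASTER_NBP):
        return NODE_NBP + master_path[len(MASTER_NBP):]
    return master_path  # NBP blank or no match: current behaviour

print(to_node_path("//share/projects/blender/projectX/projectX.blend"))
# -> //readonly/projectX/projectX.blend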
If I'm very, very lucky, Blender is "\" vs "/" tolerant and this would work cross-platform.
Thinking of this with very general knowledge and very little specifics- won't be offended if you have to tell me I'm missing enough info to drive a fleet of internet packet delivery trucks through!
Hi Julian,
Since you're a supporter, you will be getting early access to the new system, so if you are keen to test, look out for notifications in your inbox about the new software when it's ready for testing.
As for GPU rendering, we support GPUs in headless mode for Cycles, and I think also for Eevee; we certainly have not un-supported it, if that makes sense! Eevee does work in a different way to Cycles, but we simply call the rendering engine to render and wait for a tile, so it should work. The delay in sending data across the network might make it slower using our addon until we can change to the new system, which will assign a bunch of tiles to each node and let them rip through them. That will likely make Eevee rendering faster, we hope!
Packing assets will work in most cases unless you are packing linked or appended assets. In those cases, you need to use the hidden 'Pack Libraries' command (use F3 to find it). Also, if you have a linked or appended asset, here's the painful part: you must open it and pack its assets (textures and so on), otherwise you'll get only the geometry that's actually in that file and nothing else.
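If you prefer scripting it, the operators behind those menu entries can be called from Blender's Python console - a quick sketch; do double-check the operator names in your Blender version:

# Rough sketch: pack everything, including linked libraries, from a script.
import bpy

bpy.ops.file.pack_libraries()  # the hidden 'Pack Libraries' command (F3 search)
bpy.ops.file.pack_all()        # File > External Data > Pack All Into .blend
bpy.ops.wm.save_mainfile()     # save so the packed data is written into the .blend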
I go through that procedure here https://youtu.be/ttZVSYKFcgE
Remote nodes use a special UUID like path for each project. When your project is synced to a render node it will be replicated to a path like
\Users\user_name\cr\server\UUID\svr.blend
The svr.blend is the replicated copy of the file you currently have open on the master.
Good questions, keep firing them at us :)
James
Awesome replies James!
Headless with access to the read-only path seems like a reasonable interim solution if you're working on a service wrapper.
I don't have tonnes of time, but if you're looking for some off-and-on testing/feedback re the Windows service setup, I'd be interested in helping. For Windows, we have a VR capable workstation, and 3x i5 2D/basic 3D/video stations.
(I've set up windows services for automated tests in the distant past... not that keen to go down that path by myself again ;-) )
Just want to check: headless precludes GPU rendering, correct? So that would be Cycles only (no Eevee).
Re paths: requiring all paths to be relative, with a light "is it accessible" check and warning, seems the way to go. Changing file paths in the blend file could break things for non-CR rendering. And it would be another API point you have to track, etc.
We did the "set all externals to relative" operation from file->external data, got a clean report (no absolute paths, no changes needed), and it didn't work. Lots of pink. Set it to 'pack assets' and it worked - mostly; another post coming... (BUT I just realized I may have mapped the shared drive inconsistently on the other machines; I need to test that again.)
For remote nodes, what do they use as the path for the replicated blend file? ie what is the "current working directory" on a remote node when looking to load relative paths to assets? Maybe this could be a setting to allow some basic mapping between remote and master node path locations?
And if you need help with cross-platform testing, we have a Linux multiuser box with dual GPUs (has run 14 workstations with 14 instances of Minecraft, thanks to VirtualGL), an ESXi dual Xeon multiuser (multiple GPUs using VM hardware passthrough- runs Windows & Linux VMs) and can also add my personal OS X laptop. Have a FreeNAS box with CIFS shares accessible to all the above.
Final note- we are using V0.2.2-BL280.
Thanks for creating this, it's pretty amazing!
Julian
Hi Julian, wow, quite an interesting project, and a lot of detail, I'll do my best to advise :)
Ok, setting up CR as a Windows service: currently we're doing just that with the new system we're building - quite spooky really, you read our minds :). It might be possible, with some scripting know-how, to set up the current system as a Windows service; we've been through the pain of doing just that by using a Python library, pywin32, to experiment with installing Crowdrender as a Windows service.
We did all this experimentation so we can give you guys an installer, much like for any professionally engineered application, that 'just works' and installs the new system as a Windows service plus an add-on for Blender. This will no doubt be more like what you're looking for, as it would not be necessary to start Blender or anything else on a node for it to be ready and waiting to render for someone on the network; all that will be required is for the software to be installed and the computer to be on and logged in to a user that has it installed. That new system will be called CRESS (Compute RESources System).
But back to your situation: if you're wanting to get something up and running sooner, then for sure, we can give some guidance on the process we went through to install the current system as a Windows service.
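To give you a flavour, the skeleton we experimented with looks roughly like the sketch below. This is just the generic pywin32 service pattern, not our actual installer code, and the Blender path and arguments are placeholders you'd need to adapt (including however you enable the render-node add-on):

# Rough sketch of the pywin32 Windows-service pattern (placeholders, not our real code).
import subprocess

import servicemanager
import win32event
import win32service
import win32serviceutil

BLENDER_CMD = [r"C:\Program Files\Blender Foundation\Blender\blender.exe",
               "--background"]  # placeholder: add whatever starts your render node

class BlenderNodeService(win32serviceutil.ServiceFramework):
    _svc_name_ = "BlenderRenderNode"
    _svc_display_name_ = "Blender Render Node (headless)"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)
        self.proc = None

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        if self.proc:
            self.proc.terminate()
        win32event.SetEvent(self.stop_event)

    def SvcDoRun(self):
        servicemanager.LogInfoMsg("Starting headless Blender render node")
        self.proc = subprocess.Popen(BLENDER_CMD)
        win32event.WaitForSingleObject(self.stop_event, win32event.INFINITE)

if __name__ == "__main__":
    # python blender_node_service.py install / start / stop / remove
    win32serviceutil.HandleCommandLine(BlenderNodeService)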
If that is too much, the next best thing is headless mode. There will be a command prompt window open on each node, but aside from that, Blender will not be running in the foreground. This setup can be started from a .bat file in your startup folder, so as long as the logged-in user has that bat file in their startup folder, the computer will be available as a render node whenever that user is logged in.
As for the network share, read only is sufficient, I use read-only in my personal experiments and when streaming, you can see the setup I use here https://youtu.be/TOUSCaduE1s
There is one drawback, and you've spotted it already: path mapping between Windows and *nix-based OS types is non-existent in Blender, and this is the root of the problem when it comes to cross-OS operation. Sadly, Blender stores the paths of your assets as simple strings. Each string is set on the master's OS, so it is formatted the way the master machine's OS formats paths, and it will appear that way on all the nodes, including those with a different OS. Blender does not attempt any translation at all, so a Windows-based path will not be recognised on a Linux OS and will cause the node to fail to load the asset at that path.
There are a couple of ways to solve this, one would be replication, where the files are copied to each node and file paths are mapped to be relative to the blend file on each node. The other method is path translation, which requires an addon to go through each asset's path and translate it to one compatible with the host's OS.
As you can imagine, this is somewhat of a challenge when it comes to Linux vs Windows vs Mac, since although they can all access a shared location, they all do so in different ways: Linux/Mac will mount a volume and give it a particular name; it's similar on Windows, but the path format is different.
We're working on how we could solve this, which would likely be an upgrade to the existing addon. It would check each asset's path: if the asset can be opened and loaded, our addon would leave the path as is; otherwise it would sync the file to that node, meaning it would transfer it. Obviously this needs some more thought and a lot of work, since a transferred file can go stale if it's updated, and now we have the issue of having to monitor files on the master for changes, which adds further complexity and risk that the assets will not be correct - something that nobody wants.
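For what it's worth, the per-node check itself is cheap; a rough sketch of the idea (not our actual code, and it only looks at image datablocks) would be something like:

# Rough sketch of a per-node "can this asset be opened?" check (images only).
import os
import bpy

missing = [img.name for img in bpy.data.images
           if img.filepath and not img.packed_file
           and not os.path.exists(bpy.path.abspath(img.filepath))]
print("Assets that would need to be transferred:", missing)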
Anyway, I hope this helps :). It's always a pleasure to get to hear about, and maybe even help out with, projects like this. I like that you're doing this in an educational setting too!
Wishing you all the best and please write back
James