
New feature: rotate object before exporting to 3d print
Needs Review · Public

Authored by Thiago Borges de Oliveira (thborges) on Fri, Jan 24, 2:37 AM.

Details

Summary

This patch adds a new feature to the 3D-Print Toolbox that enables a per-object rotation transform before exporting, as shown in the attached image.

This feature frees us to orient the objects in Blender irrespective of their print orientation. When developing mechanical parts, for example, we usually want to keep the whole set of pieces in their assembled locations.

Of course, we can rotate the object in the slicer software. Doing it in Blender, however, eases the iterative workflow: the object goes straight out in its print orientation, saving some clicks and simplifying the 3D print workflow.

Unfortunately, this patch brings back some code related to the merge_object method that was removed a few commits ago. I don't know of another way to rotate the object while keeping all modifiers (e.g. boolean modifiers) intact. Feel free to change it if there's a better way.
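
One possible alternative (a rough sketch only, not the approach taken in this patch; the helper name is hypothetical) would be to read the object through the evaluated depsgraph, so all modifiers (including booleans) are applied, and bake the extra rotation into the matrix handed to the exporter, leaving the scene object untouched:

    from math import radians
    from mathutils import Matrix

    def rotated_export_matrix(obj, depsgraph, angle_deg=90.0, axis='X'):
        """Hypothetical helper: build the matrix for exporting `obj` with an
        extra print rotation, using the depsgraph-evaluated object so that
        modifiers (e.g. booleans) are applied without touching the original."""
        eval_obj = obj.evaluated_get(depsgraph)                # modifiers applied here
        extra_rot = Matrix.Rotation(radians(angle_deg), 4, axis)
        return extra_rot @ eval_obj.matrix_world               # hand this to the exporter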

Diff Detail

Event Timeline

I do not know if it's up to me to decide, so here are my thoughts.

The feature is vague in its functionality and its UI. It rotates objects relative to their current orientation, which makes it essentially an over-complicated "rotate 90 degrees" toggle that could be replaced by one, as I do not see a use case for any value other than 90 degrees.
And if the user does have to enter some other value, then it is no more convenient than the rotate tool.

The feature is unnecessary: exporting is not a frequent operation, and the user can simply convert to mesh and rotate all parts at once.

The feature seems very specific and tied to your workflow. Parts are usually printed in groups to optimize 3D printer efficiency, and the person responsible for printing does the required grouping and positioning anyway; it is that person who decides how to orient and place the parts together.

On the implementation side, the patch brings back old (and unnecessary) workarounds.

The patch complicates the export pipeline, and that is what I think should be avoided. Let's keep it simple and not introduce unnecessary complications.

Thank you for your thoughts!

I agree with your arguments from the point of view of a user who receives parts that are already modeled. But I believe they miss the designer, who makes parts from scratch.

For me, and I believe for other makers out there, designing a part is an iterative process. It's not just hitting export once, rotating it in a slicer, and printing. The part that comes out of the 3D printer has tolerances and errors that we need to tweak. Some best practices apply, but it's rare for the first printed part to fit its purpose.

The feature is vague in its functionality and its UI. It rotates objects relative to their current orientation, which makes it essentially an over-complicated "rotate 90 degrees" toggle that could be replaced by one, as I do not see a use case for any value other than 90 degrees.
And if the user does have to enter some other value, then it is no more convenient than the rotate tool.

Using the rotate tool defeats the purpose here. The objective is to leave the part where it belongs in the assembled mechanism. As an example, look at the gears in the image.

The rotate tool is very difficult to use when the object has many boolean operations that you don't want to apply (even temporarily). AFAIK it will deform the object if you don't rotate all the interdependent meshes.

The feature is unnecessary: exporting is not a frequent operation, and the user can simply convert to mesh and rotate all parts at once.
The feature seems very specific and tied to your workflow. Parts are usually printed in groups to optimize 3D printer efficiency, and the person responsible for printing does the required grouping and positioning anyway; it is that person who decides how to orient and place the parts together.

Even that person can benefit, since the pieces will be exported already positioned as they should sit on the bed. And I believe the designer knows best how to strengthen the part and reduce the need for support.

On the implementation side, the patch brings back old (and unnecessary) workarounds.
The patch complicates the export pipeline, and that is what I think should be avoided. Let's keep it simple and not introduce unnecessary complications.

You are right. I don't know how to do it better when a mesh uses modifiers.

Anyway, although I would like to have it integrated, I can survive applying the patch in my own environment.

This feature also violates the "What You See Is What You Get" principle that all modern DCCs strive for. Violating it could lead to users getting unpredictable results, which could be costly in the case of 3D printing.

Using the rotate tool defeats the purpose here. The objective is to leave the part where it belongs in the assembled mechanism...

That is a very specific use case from your production environment. It would be best if you made a separate export add-on that suits your needs.
I did exactly that: my add-ons support my needs and will never be included in Blender. It is not uncommon to have your own set of in-house tools.

Is the goal here to make sure that the object's two longest dimensions lie horizontally and its shortest axis vertically, so that the object is as flat as possible? I don't know much about 3D printing, but that would make sense to me. (I'd want to 3D print a gear cog lying down rather than standing up.) If that's the case, that could be implemented as a single toggle button with some logic and maths underneath. But I could be totally missing the point here.

...that could be implemented as a single toggle button with some logic and maths underneath.

This kind of algorithm would not be able to distinguish a pyramid from a cube.

...that could be implemented as a single toggle button with some logic and maths underneath.

This kind of algorithm would not be able to distinguish a pyramid from a cube.

You mean that "logic and maths" are not able to distinguish a pyramid from a cube? That doesn't sound like you trust math much.

I agree with the overall idea that these changes in orientation should be visible to the user. I could envision something like a toggle switch between "Assembly Orientation" and "Print Orientation".

The add-on now basically performs two tasks:

  1. Allow different orientations/translations for viewing (in assembly context) and printing, and
  2. Join meshes together.

IMO it would make things clearer (both from a user's perspective and from a coding & review perspective) if those two are separated more. Exporting everything as one mesh makes some sense (I don't know if STL files can contain multiple objects in one file; if not, merging them before export makes total sense). However, from what I can tell this is only something you would do when exporting. If this is kept as a separate step (maybe as part of the exporter itself), the remaining functionality is reduced to just toggling different orientations/translations. This is a lot easier to work with, as it just requires toggling between two matrices (and perhaps breaking/restoring parent-child relationships). It shouldn't be too hard to toggle between those.
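
To illustrate the "toggling between two matrices" idea, here is a rough sketch (the custom property name is made up, this is not code from the patch, and parent-child relationships are ignored):

    from mathutils import Matrix

    PROP = "assembly_matrix"  # hypothetical custom property holding the stored transform

    def toggle_export_transform(context):
        for obj in context.selected_objects:
            if PROP in obj:
                # Restore the stored assembly transform.
                flat = list(obj[PROP])
                obj.matrix_world = Matrix([flat[i:i + 4] for i in range(0, 16, 4)])
                del obj[PROP]
            else:
                # Remember the assembly transform; the user can then move the
                # object into its print orientation with the regular Blender tools.
                obj[PROP] = [v for row in obj.matrix_world for v in row]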

Does this make sense?

You mean that "logic and maths" are not able to distinguish a pyramid from a cube? That doesn't sound like you trust math much.

I am talking about code complexity. If the algorithm works purely in Object Mode then it is very simple and easy to implement (checking obj.dimensions), but it won't distinguish a pyramid from a cube.
For it to work properly it has to analyze the mesh, and that is too complicated for a feature that is basically rotating objects; even then it would probably produce false positives.
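
For illustration, this is roughly what such an Object Mode check would look like (a hypothetical helper, not code from the patch; it overwrites any existing rotation). Because it only looks at obj.dimensions, a pyramid and a cube with the same bounding box are treated identically:

    from math import radians

    def lay_flat_by_dimensions(obj):
        """Rotate so the shortest bounding-box axis points up. Bounding box only,
        so it cannot tell a pyramid from a cube.
        Usage inside Blender: lay_flat_by_dimensions(bpy.context.object)"""
        dims = tuple(obj.dimensions)
        shortest = dims.index(min(dims))
        if shortest == 0:
            obj.rotation_euler = (0.0, radians(90), 0.0)  # local X becomes vertical
        elif shortest == 1:
            obj.rotation_euler = (radians(90), 0.0, 0.0)  # local Y becomes vertical
        # shortest == 2: local Z is already the vertical axis, nothing to do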

I don't know if STL files can contain multiple objects in one file

STL doesn't even support a continuous mesh surface; it only stores an array of disconnected faces. Yeah, it's that bad.

If this is kept as a separate step...

Are you proposing a transform tool that orients selected objects?

I am talking about code complexity. If the algorithm works purely in Object Mode then it is very simple and easy to implement (checking obj.dimensions), but it won't distinguish a pyramid from a cube.
For it to work properly it has to analyze the mesh, and that is too complicated for a feature that is basically rotating objects

I don't agree with the conclusion that anything that's more complex than looking at three numbers is too complex. Given that the meshes are 3D-printable, they are manifold, which means that you can compute a tetrahedralisation of the mesh and compute its centre of mass (CoM). You could then compute the convex hull and try different faces of that hull as "bottom" to find the lowest CoM. It's likely that computing the CoM of the convex hull is already good enough.
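
A minimal sketch of that idea, assuming numpy and scipy are available (scipy is not bundled with Blender, so this is illustrative only) and using the hull itself as the mass approximation:

    import numpy as np
    from scipy.spatial import ConvexHull

    def hull_and_center_of_mass(points):
        """CoM of the solid bounded by the convex hull, via tetrahedra fanning out
        from an interior point to each hull facet (valid because the hull is convex)."""
        hull = ConvexHull(points)
        interior = points[hull.vertices].mean(axis=0)
        com, total_vol = np.zeros(3), 0.0
        for tri in hull.simplices:
            a, b, c = points[tri]
            vol = abs(np.dot(a - interior, np.cross(b - interior, c - interior))) / 6.0
            com += vol * (a + b + c + interior) / 4.0   # tetrahedron centroid
            total_vol += vol
        return hull, com / total_vol

    def best_bottom_facet(points):
        """Index of the hull facet that, used as the build plate, puts the CoM lowest."""
        hull, com = hull_and_center_of_mass(points)
        # hull.equations rows are (nx, ny, nz, d) with outward normals; for a point
        # inside the hull, its distance to a facet's plane is -(n . com + d).
        heights = -(hull.equations[:, :3] @ com + hull.equations[:, 3])
        return int(np.argmin(heights))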

...even then it would probably produce false positives.

This is certainly true of any algorithm that tries to optimise things for you. This is also one of the reasons why I agree that the export-transform has to be shown to the user in the 3D View, and not just be a few numbers in a panel. I'm saying 'transform' instead of 'rotation' as it may be useful to translate the objects for printing as well as rotating.

Are you proposing a transform tool that orients selected objects?

I'm proposing to have this patch do a few things in separate steps:

  • Allow toggling between "assembly transform" and "export transform", so that either one can be seen in the 3D viewport. This then allows the user to manipulate objects into their print orientation/position with the regular Blender tools.
  • Have some operator (maybe a wrapper around the STL exporter, maybe a separate operation you'd do before exporting) that joins all the meshes together into one big to-be-printed object.
  • Once this works, it could potentially be augmented by having things automated, such as finding the proper orientation for printing, or just simply moving all the objects so that their lowest vertex is at Z=0 (i.e. they're resting on the ground plane, even though it may be in a suboptimal/unbalanced orientation); a rough sketch of that last idea follows below.
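
A hypothetical operator body for that last idea (using Blender's Python API; not code from the patch):

    def drop_selected_to_ground(context):
        """Translate each selected mesh object so that its lowest evaluated vertex
        (modifiers applied) rests on the Z = 0 ground plane.
        Usage inside Blender: drop_selected_to_ground(bpy.context)"""
        depsgraph = context.evaluated_depsgraph_get()
        for obj in context.selected_objects:
            if obj.type != 'MESH':
                continue
            eval_obj = obj.evaluated_get(depsgraph)
            mesh = eval_obj.to_mesh()
            if mesh.vertices:
                lowest_z = min((eval_obj.matrix_world @ v.co).z for v in mesh.vertices)
                obj.location.z -= lowest_z   # shift the original object onto the plate
            eval_obj.to_mesh_clear()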

...compute its centre of mass (CoM). You could then compute the convex hull and try different faces of that hull as "bottom" to find the lowest CoM.

This would not work for signet ring meshes: their mass is concentrated at the top, along with a somewhat flat face, so it would be all too convenient for the algorithm to decide that should be the bottom, but rings are always printed on their side at an angle.
And rings with a complicated centerpiece would fail to position correctly too.

Here, take a look:

It's likely that computing the CoM of the convex hull is already good enough.

It is not: cylindrical meshes with a hole on one end would have a 50% chance of a false positive.
A signet ring or a ring with a complicated centerpiece would have a 100% chance of a false positive.

Have some operator (maybe a wrapper around the STL exporter, maybe a separate operation you'd do before exporting) that joins all the meshes together into one big to-be-printed object.

This serves no purpose.

I think a separate transform tool would be a better workflow:

  • It rotates, positions and groups selected objects tightly together in a defined work area.
  • It should not be part of the export process; it would be a separate transform operator.
  • That way the user can correct individual objects if their orientation is not optimal.

Specialized commercial software already has similar functionality: https://youtu.be/isGHPQdU3II

@Mikhail Rachinskiy (alm) I already acknowledged that such an automated approach can fail in some cases, and I suggested doing such automation as the last step in the development process of this feature (if at all). Manual control is paramount, of course.

Have some operator (maybe a wrapper around the STL exporter, maybe a separate operation you'd do before exporting) that joins all the meshes together into one big to-be-printed object.

This serves no purpose.

I'm just going along with what @Thiago Borges de Oliveira (thborges)'s code is already doing, which is merging meshes. I was assuming that this was done for a reason.

I think a separate transform tool would be a better workflow:

I don't see why this should be a separate transform tool, but maybe we're using different terminology. By "transform tool" I mean the existing Blender tools for grab, rotate, and scale.

I think we're both suggesting the same thing: the ability to separate the assembly loc/rot from the print loc/rot, and being able to organise the objects in either mode. For this organisation I would suggest using Blender's existing transform tools, and not create new ones.

...code is already doing, which is merging meshes. I was assuming that this was done for a reason.

The code is an old workaround for the PLY exporter, from when it wasn't able to export more than one object. Later the PLY exporter gained that capability, so the workaround was no longer needed and was removed.
In this patch that workaround code is repurposed to apply modifiers to the exported objects before transforming them. The merge is just a leftover part of that old code and is unnecessary.

the ability to separate the assembly loc/rot from the print loc/rot

That is an interesting feature that might be useful in certain pipelines.

In Blender this feature is rather unnecessary: every 3D printer comes with slicing software that should have packing functionality (both manual and automatic transforms), and the user is going to import their STL/PLY files there anyway.

Hi @Sybren A. Stüvel (sybren)!

  • Allow toggling between "assembly transform" and "export transform", so that either one can be seen in the 3D viewport. This then allows the user to manipulate objects into their print orientation/position with the regular Blender tools.

I agree that alternating between assembly and export transforms is the way to go. To make this possible, I believe we also need a set of translate parameters per object.

Does Blender have a feature that eases implementing this? I thought of a new scene with linked objects, but I'm not sure if that is the way to go.
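
One possible direction, sketched under the assumption that a separate "print" scene is acceptable: objects that are fully linked into another scene share their transform, so independent print orientations would need linked duplicates, i.e. object copies that still share the same mesh data (mesh edits propagate, transforms do not):

    import bpy

    def make_print_scene(context, name="3D Print"):
        """Hypothetical helper: build a scene of linked duplicates of the selected
        objects, which can then be laid out for printing independently."""
        print_scene = bpy.data.scenes.get(name) or bpy.data.scenes.new(name)
        for obj in context.selected_objects:
            dup = obj.copy()   # new object; obj.copy() keeps sharing the mesh datablock
            print_scene.collection.objects.link(dup)
        return print_scene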