
SLIM Unwrapping algorithm
Confirmed, Normal, Public, Design

Description

In the course of my bachelor thesis I get to implement a new parametrization algorithm in Blender. The algorithm is described in this paper:

http://igl.ethz.ch/projects/slim/slim-techreport-2016.pdf

Compared to the current LSCM algorithm, it minimizes a different energy, which results in considerably less stretching. It works in several iterations, which allows for early termination once the result is good enough (potentially handled similarly to the current "Minimize Stretch" GUI interaction). See the UV samples:

https://drive.google.com/folderview?id=0B8tVvh1f2GtPemVKNFY1Si0zS2M&usp=sharing
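
For context, the "different energy" is a distortion measure such as the symmetric Dirichlet energy, which is the main energy used in the paper (and the one D2531 below minimizes with Ceres). The following is a minimal, purely illustrative Eigen sketch of evaluating such an energy from per-triangle 2x2 Jacobians; the actual SLIM code is organized differently and supports several energies, so the names and inputs here are assumptions for illustration only.

#include <Eigen/Dense>
#include <vector>

/* Symmetric Dirichlet energy of a 2x2 Jacobian J that maps a reference
 * triangle to its UV image: s1^2 + s2^2 + 1/s1^2 + 1/s2^2, where s1 and s2
 * are the singular values of J. It blows up as a triangle degenerates,
 * which is what penalizes stretching. */
static double symmetric_dirichlet(const Eigen::Matrix2d &J)
{
  Eigen::JacobiSVD<Eigen::Matrix2d> svd(J);
  const double s1 = svd.singularValues()(0);
  const double s2 = svd.singularValues()(1);
  return s1 * s1 + s2 * s2 + 1.0 / (s1 * s1) + 1.0 / (s2 * s2);
}

/* Total energy: area-weighted sum over all triangles. A drop in this value
 * between iterations is what an early-termination check would look at. */
static double total_energy(const std::vector<Eigen::Matrix2d> &jacobians,
                           const std::vector<double> &areas)
{
  double energy = 0.0;
  for (size_t t = 0; t < jacobians.size(); t++) {
    energy += areas[t] * symmetric_dirichlet(jacobians[t]);
  }
  return energy;
}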

Two things need to be figured out:

  • The algorithm only works on disk topologies (should I simply close all holes except the biggest one as part of initialization? What size measure?)
  • When to terminate (will a global energy threshold do? User set? Always interactive, to be ended by a satisfied user?)

I'm developing on Mac OS X with Xcode. The algorithm has already been implemented (relatively cleanly, from what I've seen so far), but it requires C++11. I've therefore run into compilation issues on OS X.

After I get an initial implementation working, I will try to improve some parts. I will try different solvers in the local stage of the algorithm. Namely, we will try to switch from a direct to an iterative approach, hoping for a considerable speedup. After that, we will try to use momentum functions to improve the local stage.
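
To make the direct-vs-iterative idea concrete, here is a hedged Eigen-only sketch: the global step of a local/global scheme reduces to solving a sparse linear system, and a direct factorization can be swapped for an iterative solver. This is just an illustration of the Eigen API, not the actual SLIM code; the system matrix A and right-hand side b are assumed to come from that global step.

#include <Eigen/Dense>
#include <Eigen/Sparse>

/* Direct approach: sparse Cholesky-style factorization, exact but the
 * factorization cost grows quickly with mesh size. */
static Eigen::VectorXd solve_direct(const Eigen::SparseMatrix<double> &A,
                                    const Eigen::VectorXd &b)
{
  Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver;
  solver.compute(A);
  return solver.solve(b);
}

/* Iterative approach: conjugate gradient with a tolerance and iteration cap,
 * trading a bit of accuracy per solve for speed on large meshes. */
static Eigen::VectorXd solve_iterative(const Eigen::SparseMatrix<double> &A,
                                       const Eigen::VectorXd &b)
{
  Eigen::ConjugateGradient<Eigen::SparseMatrix<double>,
                           Eigen::Lower | Eigen::Upper> cg;
  cg.setTolerance(1e-6);
  cg.setMaxIterations(200);
  cg.compute(A);
  return cg.solve(b);
}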

The algorithm uses two libraries apart from Eigen. One is from IGL (Interactive Geometry Lab, ETH), which uses a GPL license. The other one is Pardiso, which contains an apparently vastly superior solver. The algorithm works just fine with Eigen alone, but it is considerably slower. However, Pardiso uses a commercial license, I think, and is only free for students or educational institutions. We'll probably have to do without it. That's not a big deal, though, unless you unwrap geometry with millions of polygons.

Timeline:

The thesis ends on the 21st of September.

I would like to get an initial raw implementation working soon, preferably within the next week. I plan to have a more refined implementation with decent GUI interaction by the end of April. After that come the already mentioned improvements.

I am grateful for any help and inputs!

  • The following is the diff of the code I have so far:

Event Timeline


Pardiso can't be included in Blender; it doesn't seem to be available under an open source license compatible with the GPL. So it looks like Eigen will have to be used, which seems fine, since we prefer to avoid extra library dependencies whenever possible.

Regarding IGL, we'll have to evaluate if it's worth adding this entire library as a new Blender dependency or not. Is this the code that you will integrate?
https://github.com/MichaelRabinovich/Scalable-Locally-Injective-Mappings

From what I can tell the mesh is represented by sparse Eigen matrices, and most of the IGL functions used seem to be pretty stand-alone. I didn't look at it in detail, but perhaps an option would be to just copy the code of a couple of IGL functions. If it turns out we only need 1K or 2K lines of code from IGL, it's not really worth adding the entire thing as an external dependency.

  • The algorithm only works on disk topologies (should I simply close all holes except the biggest one as part of initialization? What size measure?)

For LSCM / ABF we close all holes except the biggest one, which is simply based on the total length of the edges along the boundaries. Probably something smarter is possible, but I've never heard any user complain about the current method.
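
A rough sketch of that selection criterion, with hypothetical data structures (a boundary loop as an ordered ring of vertex positions, not Blender's actual chart representation):

#include <Eigen/Dense>
#include <vector>

/* A boundary loop, represented here simply as an ordered ring of vertex
 * positions. */
using BoundaryLoop = std::vector<Eigen::Vector3d>;

static double loop_length(const BoundaryLoop &loop)
{
  double length = 0.0;
  for (size_t i = 0; i < loop.size(); i++) {
    length += (loop[(i + 1) % loop.size()] - loop[i]).norm();
  }
  return length;
}

/* Return the index of the loop to leave open; every other loop would be
 * filled with temporary faces before parametrization. */
static int pick_open_boundary(const std::vector<BoundaryLoop> &loops)
{
  int best = -1;
  double best_length = -1.0;
  for (size_t i = 0; i < loops.size(); i++) {
    const double length = loop_length(loops[i]);
    if (length > best_length) {
      best_length = length;
      best = static_cast<int>(i);
    }
  }
  return best;
}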

Where did you integrate the code? It might be a good idea to hook into the current UV unwrap code in uvedit_parametrizer.c, since it does seam splitting, hole filling, triangulation, and basically all the work necessary to ensure you have disk topologies.

  • When to terminate (will a global energy threshold do? User set? Always interactive, to be ended by a satisfied user?)

It depends on performance. Most of the time UV unwrap is used on manually modelled or retopologized meshes, which aren't as big as the ones in the paper, and ideally the algorithm is fast enough for those that the tool can use some reasonable default threshold that users almost never need to modify. The tool could then integrate with functionality like live unwrap and pinning, where the UV unwrap is automatically recomputed as you set seams or move pinned UVs.

For handling bigger meshes, or if the algorithm is slow also for smaller meshes, it's probably worth making the tool also interactive. Perhaps it's reasonable to make it automatically stop at some threshold, while also allowing the user to stop it earlier manually.
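
As a sketch of what such a stopping rule could look like (names and defaults below are made up for illustration): stop when the relative energy decrease per iteration falls below a threshold, when a hard iteration cap is hit, or when the user cancels interactively.

#include <cmath>

/* Illustrative stopping rule combining a default threshold that users
 * rarely need to touch with an interactive escape hatch. */
struct StopCriterion {
  double relative_threshold = 1e-4;
  int max_iterations = 100;

  bool should_stop(double previous_energy, double energy,
                   int iteration, bool user_cancelled) const
  {
    if (user_cancelled || iteration >= max_iterations) {
      return true;
    }
    const double relative_decrease =
        (previous_energy - energy) / std::abs(previous_energy);
    return relative_decrease < relative_threshold;
  }
};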

Brecht Van Lommel (brecht) lowered the priority of this task from 90 to Normal. Apr 3 2016, 12:26 AM

Regarding IGL, we'll have to evaluate if it's worth adding this entire library as a new Blender dependency or not. Is this the code that you will integrate?
https://github.com/MichaelRabinovich/Scalable-Locally-Injective-Mappings

From what I can tell the mesh is represented by sparse Eigen matrices, and most of the IGL functions used seem to be pretty stand-alone. I didn't look at it in detail, but perhaps an option would be to just copy the code of a couple of IGL functions. If it turns out we only need 1K or 2K lines of code from IGL, it's not really worth adding the entire thing as an external dependency.

Copying just the required functions makes sense.

Where did you integrate the code? It might be a good idea to hook into the current UV unwrap code in uvedit_parametrizer.c, since it does seam splitting, hole filling, triangulation, and basically all the work necessary to ensure you have disk topologies.

Hooking into the current code is what I had in mind as well.

It depends on performance. Most of the time UV unwrap is used on manually modelled or retopologized meshes, which aren't as big as the ones in the paper, and ideally the algorithm is fast enough for those that the tool can use some reasonable default threshold that users almost never need to modify. The tool could then integrate with functionality like live unwrap and pinning, where the UV unwrap is automatically recomputed as you set seams or move pinned UVs.

For handling bigger meshes, or if the algorithm is slow also for smaller meshes, it's probably worth making the tool also interactive. Perhaps it's reasonable to make it automatically stop at some threshold, while also allowing the user to stop it earlier manually.

I guess we can decide on that only when we see how exactly it behaves once a basic implementation within Blender works.

For now, I still have issues with compilation on Mac:

As you told me, Blender can't be compiled with C++11 on Mac just yet. Is it the same for Windows and MSVC? Does it only work on Linux?

I thought about rewriting the SLIM code to be pre-C++11 compliant by including TR1 from Boost. The problem is that so far I've failed at creating the appropriate CMake files to even use TR1. Are there any good tutorials on that? I've looked around, but most only cover the basics. Blender has a pretty complicated CMake setup, at least to my beginner's understanding.

Thank you for your help!

Compiling with C++11 on Windows and Linux should be easier. For Windows there isn't anything to turn on; all the C++11 features are there by default. On Linux, enabling C++11 is probably easy as well, but I haven't tested it recently.

What does seem to work on Mac is enabling C++11 for a single module and linking against both libstdc++ and libc++:

diff --git a/CMakeLists.txt b/CMakeLists.txt
index f05e968..46ac739 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -2078,9 +2078,9 @@ elseif(APPLE)
 )
 mark_as_advanced(SYSTEMSTUBS_LIBRARY)
 if(SYSTEMSTUBS_LIBRARY)
-	list(APPEND PLATFORM_LINKLIBS stdc++ SystemStubs)
+	list(APPEND PLATFORM_LINKLIBS stdc++ c++ SystemStubs)
 else()
-	list(APPEND PLATFORM_LINKLIBS stdc++)
+	list(APPEND PLATFORM_LINKLIBS stdc++ c++)
 endif()

 set(PLATFORM_CFLAGS "-pipe -funsigned-char")
diff --git a/intern/iksolver/CMakeLists.txt b/intern/iksolver/CMakeLists.txt
index 67e7aa6..499d91d 100644
--- a/intern/iksolver/CMakeLists.txt
+++ b/intern/iksolver/CMakeLists.txt
@@ -46,4 +46,6 @@ set(SRC
 intern/IK_QTask.h
 )

+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -stdlib=libc++")
+
 blender_add_lib(bf_intern_iksolver "${SRC}" "${INC}" "${INC_SYS}")

This works because the iksolver module I tested this with doesn't link with any other C++ modules or libraries, which I guess is the case for SLIM as well. It might not be acceptable for merging into master, but it could be helpful to get things compiling now.

Thanks! This actually works ;) Didn't think I'd get it to run on Mac!

Hey, it's been quiet around here, but now I'm back at it. I tried to create a first primitive implementation. I did this without discussing any architectural details here, simply because I had to get used to working with C++ and all these CMake files first. I'll of course gladly listen to any advice/critique you may have and follow it, knowing that this version will probably be reworked completely.

My problem now is that although the implementation works, it crashes in the release build. I don't know how to approach this kind of problem; it seems quite tricky.

How can I best upload a .diff so that it appears in this thread? For now, I just added it to the description.

Also, the repo that holds our SLIM code is here:

https://gitlab.com/AurelGruber/SLIM_code

Thanks

CMAKE_BUILD_TYPE set to RelWithDebInfo can be used for debugging release builds. If it doesn't crash with that, I guess you will have to do printf debugging to narrow it down, or use a tool like Valgrind or Clang's address sanitizer to find errors.

You can upload patches if they are ready to be reviewed or tested here:
https://developer.blender.org/differential/diff/create/

Regarding the patch, I would suggest implementing this inside param_lscm_begin, param_lscm_solve and param_lscm_end, and perhaps renaming them to param_unwrap_* or something like that. That way it will work with live unwrap too. Other than that, the integration code required looks quite minimal. I don't have much to comment on; the code style could be made to match the other code in that file, but that's not important at this stage.
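
A hypothetical interface sketch of what such a rename could look like; the real param_lscm_* functions live in uvedit_parametrizer.c and their exact signatures may differ, so everything below is an assumption for illustration only:

/* Solver-agnostic entry points, so live unwrap keeps calling the same
 * begin/solve/end trio regardless of which algorithm is active.
 * ParamHandle stands in for the existing opaque handle; the enum and the
 * function names are made up for this sketch. */
struct ParamHandle;

enum ParamSolver {
  PARAM_SOLVER_LSCM,
  PARAM_SOLVER_ABF,
  PARAM_SOLVER_SLIM,
};

void param_unwrap_begin(ParamHandle *handle, ParamSolver solver, bool live);
void param_unwrap_solve(ParamHandle *handle); /* one round of iterations */
void param_unwrap_end(ParamHandle *handle);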

The gitlab link gives me a 404.

We now have C++11 libraries for OS X, so no special hacks should be needed anymore. You can set WITH_CXX11=ON and check out these precompiled darwin libraries instead of darwin-9.x.universal:
https://svn.blender.org/svnroot/bf-blender/trunk/lib/darwin

Hey guys, it's been a while. I had to sort out some technical stuff. I now have an implementation that is robust and reliable.

I assume the next steps would be to:

1: upload a patch based on the current HEAD of the Blender repo for review
2: implement feedback, back and forth until satisfactory
3: upload a build to GraphicAll for testing
4: open a thread in a forum for feedback from users
5: gather feedback here, on the bug tracker and in the forum

Is that correct? If not, how does this usually work?

Thanks for your support

Aurel

Great!

Those steps are entirely the right way to do it.

Hey guys

So I linked the diffs of two branches:

D2530 is just the SLIM implementation with all the features
D2531 is D2530 plus the implementation of symmetric Dirichlet energy minimisation with Ceres. It looks fun, but not very promising IMHO.

D2530: It can be used as follows: unwrap a model as usual. You can then set the number of SLIM iterations, the relative scale w.r.t. pins, the weight influence, weight maps, ...

Using SLIM as a relaxation algorithm can be done by hitting Ctrl+M. I tried to get it into the UI like Minimize Stretch, but it refuses to show up ^^. Btw, I recently noticed that I so far forgot to implement relaxation of just a selected subset of vertices, so it still always relaxes all islands.

D2531: Same as D2530, but the #iterations parameter works differently when unwrapping regularly (not relaxing):

#iterations % 10 => number of SLIM iterations

floor(#iterations / 10) * 10 => number of Ceres iterations (CGNR / Iterative Schur seem to work well)
Essentially, #iterations = 1010 means 10 SLIM iterations and 100 Ceres iterations.
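
For reference, selecting those Ceres linear solvers is a one-line option. Here is a minimal, self-contained Ceres example (with a toy residual, not the actual symmetric Dirichlet problem) just to show where CGNR or ITERATIVE_SCHUR is plugged in:

#include <ceres/ceres.h>

/* Toy residual: minimize (x - 3)^2. The interesting part is only the
 * solver options below. */
struct ToyCost {
  template <typename T>
  bool operator()(const T *const x, T *residual) const
  {
    residual[0] = x[0] - T(3.0);
    return true;
  }
};

int main()
{
  double x = 0.0;
  ceres::Problem problem;
  problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<ToyCost, 1, 1>(new ToyCost), nullptr, &x);

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::CGNR; /* or ceres::ITERATIVE_SCHUR for
                                             * problems with Schur structure */
  options.max_num_iterations = 100;

  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  return 0;
}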

@Brecht Van Lommel (brecht) / @Sergey Sharybin (sergey) Do you guys maybe have time to take a look at it?

Also, in order to build it, WITH_CXX11 must be enabled and the darwin libraries used (not darwin-9.x...). I usually build without CYCLES and LLVM because that tends to not work - but I guess that's just some issue on my side and unrelated to the code changes.

Also, I added many files (~100) in extern/Eigen3/unsupported and intern/SLIM/ext/libigl_extract that my code needs.

Thank you

Hi Aurel and Brecht, I finally learned how to write your names LOL,

Here is a short video I made to demonstrate the difference: https://youtu.be/IKzra5Gjh5E

  • SLIM is pretty robust in terms of reducing stretch but performs poorly at packing UVs, because it always rotates UVs.

I didn't test the UV pinning feature or live unwrap. If there is some feature that needs to be tested, please let me know :)

Thanks to both of you, can't wait to see it in master and start using it as the default

Any chance for this in Blender 2.81?