I use RGB data to locate pixels in my image texture.
I used a Combine XYZ node to convert Color values to the Vector type to drive the Image Texture node's Vector input.
But I find I need to insert a Vector Transform node in between for the coordinates to have any effect on the texture mapping.
If I plug a raw RGB color output straight into the Vector input, the Image Texture ignores it and uses the mesh's UV mapping instead.
If I use a Combine XYZ node to feed the Image Texture's Vector input, it is also ignored and the default UV mapping is used instead.
Gradient Texture node behaves differently though:
If it is fed from the RGB or Combine XYZ node, it returns black; it only works as expected with a Vector Transform node in between.
It looks like the Combine XYZ node's Vector output is not acceptable to texture nodes for some reason, but pushing it through a Vector Transform (which changes nothing) makes the data acceptable and the textures are mapped as expected.
Here's a test project:
Roll around the RGB node's color circle - the color of the cube should change.
Connect either the raw RGB, Combine XYZ or Vector Transform output to the Vector reroute.
Disconnect the Vector reroute to see the UV-mapped texture on the cube (Image Texture and Gradient mixed).
All three sources should give the same output, because the numbers fed into the Vector reroute are essentially the same. Aren't they? So why does only the Vector Transform node's output make the color circle change the mapping of the texture on the cube?
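To illustrate why I expect all three sources to behave identically, here is a plain-Python sketch of what each branch should amount to numerically. The function names are hypothetical stand-ins for the nodes, not Blender API calls - the point is just that all three paths carry the same three floats:

```python
# A color and a vector are both just three floats; an identity
# Vector Transform should not change the numbers at all.

def rgb_output(r, g, b):
    """Stand-in for the raw RGB node output: three floats."""
    return (r, g, b)

def combine_xyz(x, y, z):
    """Stand-in for Combine XYZ: packs three floats into a vector."""
    return (x, y, z)

def vector_transform_identity(v):
    """Stand-in for a Vector Transform between identical spaces,
    i.e. a no-op on the numbers."""
    return tuple(v)

color = (0.25, 0.5, 0.75)

a = rgb_output(*color)
b = combine_xyz(*color)
c = vector_transform_identity(combine_xyz(*color))

# All three branches carry identical numbers, so the Image
# Texture's Vector input should see the same coordinates.
print(a == b == c)  # → True
```

Since the numbers are identical, the difference must lie in how the texture nodes treat the socket's type, not in the data itself.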
In Blender 2.78 only Vector Transform works as expected.
In Blender 2.77a Vector Transform and Combine XYZ give the same result.
I'd expect a pure RGB output plugged into the Vector input to work here the same way - why would one not want this? Am I missing something?