Comments (8)
Hi @MelanieN,
What you are asking for is something we have implemented for CP and nonnegative CP, but not yet for Tucker.
If I understood your issue correctly, you would want to fit a nonnegative Tucker decomposition, say with factors (U,V,W) and a core G, but e.g. U and V are already known and fixed?
For CP we have a skip_mode option, which lets you leave some of the modes un-updated during fitting. In my opinion it would be fairly easy to add this functionality to Tucker, if that is what you are looking for.
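To illustrate the idea (this is a minimal plain-numpy sketch, not TensorLy's implementation; the `cp_als` and `skip_modes` names here are purely illustrative), a skip-mode option in CP-ALS just means some factors are never updated during the alternating least-squares sweeps:

```python
import numpy as np

def unfold(T, mode):
    # mode-n unfolding: move `mode` to the front, flatten the rest (C order)
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(mats):
    # column-wise Kronecker product of a list of matrices
    out = mats[0]
    for M in mats[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, out.shape[-1])
    return out

def cp_als(X, factors, n_iter=50, skip_modes=()):
    # Plain CP-ALS; factors whose mode is in skip_modes are never updated.
    factors = [f.copy() for f in factors]
    for _ in range(n_iter):
        for n in range(X.ndim):
            if n in skip_modes:
                continue  # this factor is known and stays fixed
            others = [factors[m] for m in range(X.ndim) if m != n]
            kr = khatri_rao(others)
            gram = np.ones((kr.shape[1], kr.shape[1]))
            for f in others:
                gram *= f.T @ f  # Hadamard product of the Gram matrices
            factors[n] = unfold(X, n) @ kr @ np.linalg.pinv(gram)
    return factors

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))
B = rng.standard_normal((5, 2))
C = rng.standard_normal((6, 2))
X = np.einsum('ir,jr,kr->ijk', A, B, C)  # exact rank-2 tensor

# A is known: fix it via skip_modes and only estimate B and C
init = [A, rng.standard_normal((5, 2)), rng.standard_normal((6, 2))]
A2, B2, C2 = cp_als(X, init, skip_modes=(0,))
print(np.allclose(A2, A))  # True: the skipped factor was never touched
```

Since each ALS update is an exact least-squares solve for one factor, fixing a known factor this way still decreases the fitting error monotonically over the remaining modes.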
from tensorly.
@JeanKossaifi I see, sorry I misunderstood the issue. I don't ever use partial Tucker :)
We could definitely implement a partial nonnegative Tucker. I am working on nonnegative Tucker these days, so I might add a skip_mode option first; if I find the time, I will also implement nonnegative_partial_tucker.
Awesome! As a side note, going the other way may be easier: tucker is just partial_tucker with no skipped modes. Typically the main difference is in how the tensor is unfolded along the modes to optimize: the skipped modes are simply left out, which is a more efficient way of computing the expression than forming the full Kronecker product with U_k = Id for the skipped modes.
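To make that unfolding identity concrete, here is a small numpy check (plain numpy, not TensorLy internals; the `unfold` and `mode_dot` helpers are illustrative): the mode-0 unfolding of G ×₁ V ×₂ W matches the full-Tucker unfolding formula with U₀ set to the identity, so a partial Tucker can simply drop that Kronecker factor:

```python
import numpy as np

def unfold(T, mode):
    # mode-n unfolding: move `mode` to the front, flatten the rest (C order)
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    # multiply tensor T by matrix M along the given mode
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 3, 2))  # core
V = rng.standard_normal((5, 3))     # mode-1 factor
W = rng.standard_normal((6, 2))     # mode-2 factor

# Partial Tucker with mode 0 skipped: X = G x_1 V x_2 W
X = mode_dot(mode_dot(G, V, 1), W, 2)

# The full-Tucker unfolding formula with U_0 = Id gives the same matrix,
# but multiplying by the identity (and putting it in the Kronecker
# product) is wasted work -- skipping the mode avoids it entirely.
lhs = unfold(X, 0)
rhs = np.eye(4) @ unfold(G, 0) @ np.kron(V, W).T
print(np.allclose(lhs, rhs))  # True
```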
Also, I just remembered that we do have a skip-mode option in non_negative_tucker_hals, which is another algorithm for computing NTD that I would recommend giving a try.
So to solve your issue, although this is not ideal performance-wise, you can do the following:
1/ use non_negative_tucker_hals() to compute the NTD;
2/ in the init= field, pass a Tucker tensor with identity matrices along the modes that should not be decomposed;
3/ in the fixed_modes= field, pass a list with the indices of the modes that are not decomposed.
This should work just fine. Here is some tentative example code for decomposing a tensor along modes 1 and 2 while keeping mode 0 fixed:
import tensorly as tl
from tensorly.decomposition import non_negative_tucker_hals
data = tl.abs(tl.randn([5, 5, 5]))
# identity factor on mode 0, nonnegative random factors on modes 1 and 2
tucker_init = tl.tucker_tensor.TuckerTensor((tl.abs(tl.randn([5, 3, 3])), [tl.eye(5), tl.abs(tl.randn([5, 3])), tl.abs(tl.randn([5, 3]))]))
out = non_negative_tucker_hals(data, [5, 3, 3], init=tucker_init, fixed_modes=[0])
from tensorly.
Thanks @cohenjer. I think @MelanieN refers to partial_tucker, in which case we don't have any projection matrix at all along some of the modes. We could use a full Tucker with a skip_mode param and create a (fixed) identity for that mode, though ideally we'd just want a nonnegative partial_tucker that completely skips computation along the skipped modes.
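As a rough numpy sketch of that last point (the helper names here are illustrative, not TensorLy code): a true partial Tucker only stores and applies factors along the decomposed modes, so the skipped mode never even sees an identity matrix:

```python
import numpy as np

def mode_dot(T, M, mode):
    # multiply tensor T by matrix M along the given mode
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def partial_compose(core, factors, modes):
    # Rebuild a tensor from a partial Tucker: factors are applied only
    # along `modes`; the skipped modes keep the core's dimensions as-is.
    T = core
    for M, m in zip(factors, modes):
        T = mode_dot(T, M, m)
    return T

rng = np.random.default_rng(1)
core = rng.standard_normal((7, 3, 2))
V = rng.standard_normal((5, 3))
W = rng.standard_normal((6, 2))

# Mode 0 is skipped entirely: no projection matrix, no 7x7 identity
X = partial_compose(core, [V, W], modes=[1, 2])
print(X.shape)  # (7, 5, 6)
```

This is equivalent to a full Tucker with an identity factor on mode 0, but avoids ever materializing or multiplying by that identity.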
Hi, thanks for the reply! It is indeed as @JeanKossaifi mentions, but I am not so familiar with tensor theory, just applying it. I look forward to any updates on this topic!
Thank you, I'll give this a try!
@MelanieN did this work for you?