kevinmackenzie / myonedriveclient
Discontinued, see https://github.com/KevinMackenzie/LocalCloudStorage
License: GNU General Public License v3.0
Since this file gets renamed as if it were a remote delta, it gets filtered out like a remote delta, which is NOT desirable.
When a file is selected in the file explorer and the app tries to update its LastModified, an exception is thrown in DownloadedFileStore.SetItemLastModified with the message:
System.IO.IOException: 'The process cannot access the file 'File Path' because it is being used by another process.'
This method uses over 200 MB of memory in debug mode, possibly due to the massive amount of recursion and parallelism that is used.
MsalUiRequiredException: `Null user was passed in AcquireTokenSilent API. Pass in a user object or call acquireToken authenticate.`
After adding a new instance with the OneDrive service, this gets thrown before the user is asked to log back in.
Per the documentation on Microsoft's website (https://dev.onedrive.com/items/view_delta.htm), we should cache all deltas before applying them to the local state: "After you have finished receiving all the changes, you may apply them to your local state."
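The cache-then-apply pattern from the docs boils down to following the page links until a delta link appears, collecting every change first. The project is C#; this is a language-agnostic Python sketch, with `get_page` standing in for whatever actually fetches one delta page:

```python
def collect_all_deltas(get_page):
    """Follow delta pages until none remain, caching every change
    so they can all be applied to local state afterward.

    get_page(link) is a hypothetical callable returning one parsed
    delta page (a dict shaped like a Graph delta response).
    """
    cached = []
    link = None
    while True:
        page = get_page(link)
        cached.extend(page["value"])
        link = page.get("@odata.nextLink")
        if link is None:
            # "@odata.deltaLink" marks the end of this change set;
            # it is saved and used as the starting point next time.
            return cached, page.get("@odata.deltaLink")
```

Only after `collect_all_deltas` returns would the changes be applied, which avoids acting on a half-received change set.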
Sometimes when moving items locally, the metadata does not get properly updated in the remote metadata cache (duplicate folder contents). Locally, the size of the local metadata sometimes doubles.
When creating this project, I started with some sample code for the Graph API but never updated the README.md to describe what the project actually is.
When looking at the list boxes, it appears that the remote requests hit the local, then bounce back to the remote where they stay in the outgoing list box and never make progress...
Previously, I was attempting to set the last-modified of the remote file after uploading it so I could ensure files are in sync, but it looks like the remote last-modified will need to be forced upon the local file instead.
When local changes take too long to sync with remote or there are connection issues, we don't want to lose the events sent when local changes are made, so they should be converted into a set of actions that will take place when we regain connection/periodically. This is similar to the previous structure, but a local change will prompt an attempt to reprocess the queue. Local files that are created as a result of conflicts will automatically be uploaded due to these local event handlers.
When there are metadata entries without a valid ID (i.e. a generated one), these should periodically be scanned for and put into a separate dictionary to be processed for upload (or to check whether the user wants to keep them).
Also, the app must pick up changes made while it was not running. On startup, it checks the timestamps of all existing items against the timestamps in the metadata and queues changes as appropriate. The metadata is then iterated through to check for deleted files. Items that do not exist in the metadata (new files) are added to the metadata with an invalid (generated) ID, and as the queue is processed these items should receive valid IDs.
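The startup reconciliation described above could look something like this. A hedged Python sketch; the dictionary shapes for `local_files` and `metadata` are hypothetical stand-ins for the real metadata classes:

```python
import uuid

def startup_scan(local_files, metadata):
    """Compare the file system against cached metadata after a restart.

    local_files: {path: mtime} as observed on disk (hypothetical shape).
    metadata:    {path: {"id": ..., "mtime": ...}}, updated in place.
    Returns a queue of (action, path) tuples to process.
    """
    queue = []
    for path, mtime in local_files.items():
        entry = metadata.get(path)
        if entry is None:
            # New file: give it a generated (invalid) ID until the
            # upload completes and a real remote ID comes back.
            metadata[path] = {"id": "gen-" + uuid.uuid4().hex, "mtime": mtime}
            queue.append(("upload", path))
        elif mtime > entry["mtime"]:
            # Modified while the app was closed.
            queue.append(("upload", path))
    for path in list(metadata):
        if path not in local_files:
            # In metadata but gone from disk: deleted while closed.
            queue.append(("delete", path))
    return queue
```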
Right now, the request queue is being bypassed, but it may be desirable to have a similar structure so the user can see all of the requests to be made before they start (e.g. downloaded files). This queue could only be modified by the main thread, and an async method could process it. This should only really be relevant for remote file store requests, because the local ones don't take any time. It should be noted that while processing this queue, any "WaitForUser" requests must stop the queue from continuing (both local and remote) until the request has been Completed.
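The WaitForUser rule might be sketched like this (Python sketch with hypothetical request dictionaries; in the real project this would be the async method draining the request queue):

```python
from collections import deque

def process_queue(queue):
    """Process requests in order; a request in the WaitForUser state
    halts the whole queue until it is marked Completed.

    queue: deque of hypothetical {"name": ..., "status": ...} dicts.
    Returns the names of the requests processed in this pass.
    """
    done = []
    while queue:
        req = queue[0]
        if req.get("status") == "WaitForUser":
            # Queue stays paused here until user input resolves it;
            # a later pass picks up where this one stopped.
            break
        done.append(queue.popleft()["name"])
    return done
```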
When the queue gets unexpectedly stopped, the old delta link will be used and we want to avoid downloading any files that are already up to date.
This was the intended result, but it might be worth looking into the benefits of keeping the folder last-modified timestamps in sync.
The DownloadedFileStore needs to know when a file gets uploaded because it needs to update its _localItems with the ID of the uploaded file. Perhaps it makes more sense to put that kind of metadata information in the ActiveSyncFileStore and leave the DownloadedFileStore to only worry about interacting with the file system (and switch to using the ILocalFileStore interface more than IRemoteFileStoreDownload).
AFAIK this concept is OneDrive-specific. A feature such as this is important, but all components of the system except for the specific implementations of the IRemoteFileStoreConnection need to support any additional data that the file store may save.
At this point, the *ViewModel classes implement much of the important control logic for the core application, but this is not entirely desirable. I do not have a solution at this time, but the ViewModel should be a layer between the UI and the control logic, and between the control logic and the data.
Ideally, the view-model classes should be in their own assembly and only depend on contracts and data. The data classes should have their own assembly too, leaving most of the code to the LocalCloudStorage assembly. It may be wise to split that up to encourage even more decoupling (although the logic is already somewhat well decoupled).
When pulling down a delete request, it would be good to make sure the local file hasn't been modified since the delete was issued, because losing work is always bad.
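A minimal sketch of that guard, assuming the remote delete carries a usable timestamp (Python, illustrative only; the function name and return convention are my own):

```python
import os

def safe_delete(path, remote_delete_time):
    """Only delete if the local file has not been modified since the
    remote delete was issued; otherwise keep it so no work is lost.

    remote_delete_time: POSIX timestamp of the remote delete (assumed).
    Returns True if the local file is gone, False if it was kept.
    """
    if not os.path.exists(path):
        return True
    if os.path.getmtime(path) > remote_delete_time:
        # Local edits are newer than the delete: treat as a conflict
        # instead of silently discarding the user's changes.
        return False
    os.remove(path)
    return True
```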
The current UI does not support any pausing. This should be done through a context menu with a few options of durations to pause for (1 hour, 2 hours, 4 hours, indefinitely, or a custom duration).
While the code is functional without this, it is kind of a pain to need to check the SHA-1 sum of the local file right after uploading it and checking for remote deltas. There is currently no way to communicate the remote last-modified timestamp to the local file store.
When the app is closed while the queue is being processed, we want to exit gracefully, which means saving the progress of the queue to a file and also saving the metadata.
With the new BufferedRemoteFileStoreInterface this is not done yet. It should do something similar to the ActiveSyncFileStore, but in a way that communicates with an external source (a "keep local, keep remote, keep both" kind of thing). All involved requests should be put into the limbo dictionary until they are explicitly canceled. This kind of logic should be done in ProcessQueue before making requests with the IRemoteFileStoreConnection.
Right now this is done in the constructor using the .Wait() method on the task, but it would be ideal to do this asynchronously.
It isn't ideal, but it is important to do a deep scan on startup to ensure all of the metadata is up to date and then apply those deltas to the remote. If no metadata exists at all, it would be a good idea to not apply the deltas and just use FileStoreBridge.GenerateLocalMetadataAsync.
When there are existing files/folders when attempting to sync, some exceptions get thrown somewhere, but I haven't put any time into seeing what they are yet. It would be good to test all parts of the ApplyAllDeltas method.
We need to track the progress of the download/upload of an item, but HttpClient does not support that natively. Since the HttpClient is fairly decoupled from the actual OneDriveRemoteFileStoreConnection, it wouldn't be unreasonable to create a web/HTTP class that wraps the necessary tools to get progress updates.
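Such a wrapper usually boils down to chunked stream copying with a callback. A minimal Python sketch of the idea (the callback signature is an assumption; in C# the response content stream would plug in where `src` is):

```python
def copy_with_progress(src, dst, total, on_progress, chunk=64 * 1024):
    """Copy a readable stream to a writable one in fixed-size chunks,
    reporting (bytes_complete, bytes_total) after each chunk so a UI
    progress bar can update during the transfer."""
    done = 0
    while True:
        buf = src.read(chunk)
        if not buf:
            break
        dst.write(buf)
        done += len(buf)
        on_progress(done, total)
```

The same loop works for uploads by swapping which side is the network stream.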
An exception gets thrown due to the custom TSafeObservableCollection class. Here's the message:
One or more of the following sources may have raised the wrong events:
System.Windows.Controls.ItemContainerGenerator System.Windows.Controls.ItemCollection System.Windows.Data.ListCollectionView
* LocalCloudStorage.TSafeObservableCollection`1[[LocalCloudStorage.ViewModel.FileStoreRequestViewMo>delBase, LocalCloudStorage, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]]
(The starred sources are considered more likely to be the cause of the problem.)
Since the local updates are synced on the same timer as the remote ones, they don't sync immediately, but immediate syncing is critical to smooth performance. Since the local file state is always in flux, we want to make sure that our metadata is always as up to date as possible and that local changes are always uploaded to the remote. One option is a very fast (5-10 second) update cycle, but it is also possible to upload these changes immediately.
When requesting the download URL from an item manually, we get a public/pre-authenticated URL with a prefix like public-, but with the @microsoft.graph.downloadUrl we get from the DriveItem list in the delta page, the URL is not pre-authenticated, and using the typical authentication used in other requests always returns a 401 (Unauthorized).
Renamed items that trigger the FileSystemWatcher event handler in the ActiveSyncFileStore do not clearly differentiate between the new/old item names. Do the event args give us the new item name or the old one? (For renames, RenamedEventArgs exposes both: OldFullPath/OldName and FullPath/Name.)
Obviously, the hashes will not represent the hashes of the actual contents, so this will need to be handled.
All of the methods in OneDriveRemoteFileStoreConnection use the same AuthenticatedHttpRequestAsync method but do no real error checking. This is a must in the future.
We really want to be able to pause this processing, so there must be some sort of signaling system for the processing of the queue to pause, but not cancel completely. Maybe a PauseToken is what we need.
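A PauseToken can be built from a single reset event that workers check between queue items. A Python sketch of the idea (the C# version would likely wrap a ManualResetEventSlim the same way):

```python
import threading

class PauseToken:
    """Cooperative pause signal: workers call wait_if_paused() between
    queue items; the UI thread calls pause()/resume(). Unlike a
    CancellationToken, work resumes where it left off."""

    def __init__(self):
        self._resumed = threading.Event()
        self._resumed.set()  # start in the running state

    def pause(self):
        self._resumed.clear()

    def resume(self):
        self._resumed.set()

    @property
    def paused(self):
        return not self._resumed.is_set()

    def wait_if_paused(self, timeout=None):
        """Block while paused; returns False if the timeout expired
        while still paused, True once running."""
        return self._resumed.wait(timeout)
```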
Right now, the ActiveSyncFileStore prioritizes remote changes over local ones, but this is probably not the desired behavior. There need to be timestamp checks on most operations to ensure that local changes are not lost.
Since all of the processing gets done on a background thread for each instance of the cloud storage, the PropertyChanged events are sent from these background threads, and the UI elements are NOT enjoying that (lots of exceptions).
It is possible to add new instances, but removing them is not possible in the UI.
When using the Keep Remote option, the local version is still kept.
I'm not really sure how/why and I can't recreate the issue, but sometimes it looks like a SHA-1 hash gets split and there is a bunch of garbage at the end of the JSON text.
When renaming folders locally, the event handler only modifies the metadata of the folder that is renamed; it does not update the metadata of the children. It is unclear whether the children of the folder also receive rename events.
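Whatever the file-system events turn out to provide, the metadata fix-up is a prefix rewrite over the descendants. A Python sketch with a hypothetical flat metadata list:

```python
def rename_folder(metadata, old_path, new_path):
    """Rewrite the renamed folder's path and every descendant's path,
    since the watcher may only raise one event for the folder itself.

    metadata: hypothetical flat list of {"path": ...} entries.
    """
    prefix = old_path.rstrip("/") + "/"
    for item in metadata:
        if item["path"] == old_path:
            item["path"] = new_path
        elif item["path"].startswith(prefix):
            # Replace only the leading folder component; the trailing
            # '/' in the prefix keeps '/ab' from matching '/a'.
            item["path"] = new_path.rstrip("/") + "/" + item["path"][len(prefix):]
```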
ActiveSyncFileStore.ApplyAllDeltas is pretty monolithic. The individual actions for each case should have their own methods in case ActiveSyncFileStore needs the same behavior for a delta case elsewhere. This might not be important, or it might result in many methods that are unsafe to call in many situations, so it could be bad.
This should just be an adaptation of the original scan for metadata method. This should also upload/delete/change remote files.
What this means is that the local file will always overwrite the remote file regardless of which one was updated last... so long as the local item deltas are processed first.
When an item is downloading, it will successfully go into "In Progress" mode on the list box, but the progress bar and message of bytes complete / bytes total do not update. Why is this happening?
Right now, no data gets loaded on startup for existing cloud storage instances and global settings, but obviously this is a crucial feature. Where should this file go? Probably some kind of path given to the LocalCloudStorageViewModel by the consuming application.
When asking the user for how they want something handled, future items in the queue may conflict with whatever item the user is currently dealing with. It may be favorable to pause the queue during this time.
When a file is locally modified, the remote will not get the local last-modified timestamp. Instead, OneDrive will get the timestamp of when the file is uploaded, which will always be later.
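Forcing the remote last-modified onto the local file (the approach suggested elsewhere in these notes) is a one-liner in most runtimes. A Python sketch assuming the remote time has already been converted to a POSIX timestamp:

```python
import os

def apply_remote_mtime(path, remote_mtime):
    """Stamp the local file with the remote lastModified so that
    later timestamp comparisons stay meaningful after an upload
    round-trip. remote_mtime: POSIX timestamp (an assumption)."""
    os.utime(path, (remote_mtime, remote_mtime))
    return os.path.getmtime(path)
```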
It doesn't seem like this variable is doing much of anything, especially in the LocalFileStoreInterface. Perhaps this variable, which looks like a "failure" flag, could do something more useful, like replacing the check for limbo requests when pausing the queue.
When this option is selected either before creation or while the instance is running, how should this be handled? Is the single file store bridge sufficient with a switching setting, or should there be something else? This is yet another option that gets updated in the Control, like the BlackList and the RemoteDeltaFrequency.
When there is a delta that moves a folder with contents, all children also need to be moved.
It is very important that the implementations that access the file system ensure that none of the requests result in changes to the file system outside of the directory of the PathRoot. This could be a huge security issue if a malicious plugin is used.
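A containment check usually means resolving the candidate path and verifying it still lives under the root. A Python sketch (realpath collapses `..` segments and symlinks, which is what makes the check meaningful):

```python
import os

def is_within_root(root, candidate):
    """Return True only if candidate, resolved relative to root,
    stays inside root. Rejects '..' escapes and symlink tricks,
    the kind of request a malicious plugin might issue."""
    root = os.path.realpath(root)
    resolved = os.path.realpath(os.path.join(root, candidate))
    return os.path.commonpath([root, resolved]) == root
```

Every file-system request would be vetted with a check like this before touching the disk.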