anydistro / bxt
Next generation repository maintenance tool (WIP)
License: GNU Affero General Public License v3.0
First-time authentication is done with username and password, but the result is set as if it were a web-browser action, using a token.
Not all scripting tools can read the contents of a cookie jar or set it on subsequent requests, but they can all read and parse the response body.
Credentials should therefore be returned as a JSON response body, which a machine can understand immediately.
Please see the examples in #54 (comment)
(from @fhdk)
To streamline the development process, CI (via GitHub Actions) should be added at some point.
401 is for an expired login or invalid credentials: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
403 is for a user who is authenticated but not authorized for the resource: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
bxt uses 401 for both; this needs to be fixed so the frontend can act properly. Related to #48.
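A minimal sketch of the intended split (the helper name is hypothetical, not bxt's actual code):

```python
# Hypothetical helper showing the 401-vs-403 distinction bxt should make.
def auth_status(token_valid: bool, has_permission: bool) -> int:
    """Return the HTTP status for an authenticated request."""
    if not token_valid:
        return 401  # Unauthorized: missing, expired, or invalid credentials
    if not has_permission:
        return 403  # Forbidden: authenticated, but not allowed to do this
    return 200

print(auth_status(False, True))   # 401: bad/expired token
print(auth_status(True, False))   # 403: valid token, insufficient rights
```

With this split the frontend can distinguish "log in again" (401) from "you lack permission" (403).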
xz-utils (and maybe other archivers) should be installed in the production container so it can handle archives of that type.
Sadly the application lacks any testing, which is quite crucial for an application of this size. My initial idea was to cover the application with tests after the first "functional" release, when all the architecture details have stabilized.
The daemon part follows domain-driven design, which should in theory make various testing strategies (e.g. unit, BDD, etc.) not that hard to implement.
It was part of the original design, but I never used it for testing and am pretty sure some of the paths are hardcoded.
The standard name for a development branch is "master" or "main".
Using the reflect-cpp library instead of manual serialization would remove a decent amount of boilerplate.
For now sync is box-wide, i.e. it synchronizes all the repos/architectures for the sync branches (unstable by default). This is not desirable behavior, as it will break release cycles that differ between architectures.
From what I know it's better suited for online services.
This is most crucial for the first sync, but it would be a good feature to have in any case. If synchronization breaks due to some error (e.g. a network outage, or the list of packages changing during the sync), it would be good to make bxt pick up already-downloaded packages from the last sync and download only the missing ones.
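The resume logic could boil down to comparing what is on disk against the remote package list and re-downloading only what is absent or incomplete. A minimal sketch (the function name and size-based check are assumptions; a real implementation might compare checksums instead):

```python
# Hypothetical resume step: skip files that are already complete on disk.
def files_to_download(remote: dict[str, int], local: dict[str, int]) -> list[str]:
    """remote/local map filename -> size in bytes; local holds what's on disk."""
    return [
        name for name, size in remote.items()
        if local.get(name) != size  # absent, or partially downloaded
    ]

print(files_to_download(
    {"a.pkg.tar.zst": 1024, "b.pkg.tar.zst": 2048},
    {"a.pkg.tar.zst": 1024, "b.pkg.tar.zst": 512},   # b was interrupted
))  # ['b.pkg.tar.zst']
```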
Operations (requests) that take a long time are not indicated in the web UI. Sometimes it makes sense to indicate such operations so the user understands something is happening.
Related to #22
When a compare request is posted, the request contains a JSON body with a list of boxes to compare.
[
    {
        "branch": "unstable",
        "repository": "core",
        "architecture": "x86_64"
    },
    {
        "branch": "testing",
        "repository": "core",
        "architecture": "x86_64"
    }
]
The response only contains boxes known to the database. A short excerpt:
{
    "sections": [
        {
            "branch": "unstable",
            "repository": "core",
            "architecture": "x86_64"
        }
    ],
    "compareTable": {
        "flex": {
            "unstable/core/x86_64": {
                "sync": "2.6.4-5"
            }
        },
        "logrotate": {
            "unstable/core/x86_64": {
                "sync": "3.21.0-2"
            }
        }
    }
}
Is this result to be expected if a requested box does not exist?
Would it be an empty list if none of the boxes exist?
[
    {
        "branch": "stable",
        "repository": "core",
        "architecture": "x86_64"
    },
    {
        "branch": "testing",
        "repository": "core",
        "architecture": "x86_64"
    }
]
That one answered itself.
{
    "message": "No compare data found (all sections are empty)",
    "status": "error"
}
Permissions should be implemented as part of the user configuration.
The permissions scheme is too advanced to make configurable in a fancy way. The easiest implementation would be plain-text editing with validation; we can revisit this later.
The frontend was made using the "Create React App" tool by Facebook, which enables an easy first start for a React application. Apparently CRA is no longer developed, and NextJS would be a good alternative. There are plenty of guides describing the migration process.
This project follows DDD; for better understanding and developer onboarding, it makes sense to describe the components it uses and add proper references.
DI setup should be more granular i.e. have different sets of injected dependencies (mocked vs real ones).
Verifying a token using the /api/verify endpoint returns 401.
The expected result for a valid token:
{
    "status": "ok"
}
We should decide what to do if a package in the pool already exists but is a different file. Possible solutions:
In my opinion, option 2 is the best one.
Right now it doesn't log out and just says the token is expired. It makes sense to do both.
There are some places in the code where hardcoded security keys are used.
The user management endpoints use POST for all actions.
Best practice is to use the relevant verb for the associated action:
verb | action
---|---
POST | Create
GET | Read
PUT | Update
DELETE | Delete
By oversight, bxt doesn't have an option to Delete already-added packages.
It would also be good to have a Copy option, which is like Snap but for individual packages, and a Move option for Copy+Delete.
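Applied to user management, the verb table could look like the following route layout (the paths are assumptions for illustration, not bxt's actual endpoints):

```python
# Hypothetical RESTful layout for user-management endpoints, one verb per
# action as in the table above.
routes = {
    ("POST",   "/api/users"):        "create user",
    ("GET",    "/api/users"):        "list users",
    ("PUT",    "/api/users/{name}"): "update user",
    ("DELETE", "/api/users/{name}"): "delete user",
}

for (verb, path), action in routes.items():
    print(f"{verb:6} {path:22} -> {action}")
```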
With recent changes, ArchRepoSyncService became a bit too large; this needs to be addressed.
For whatever reason, synchronization triggers twice when called via the API.
Passwords are stored in plain text; we need to address this before the final release.
We need to document our public REST API
Related to #22
I did a snap from unstable to testing
{
    "source": {
        "branch": "unstable",
        "repository": "core",
        "architecture": "x86_64"
    },
    "target": {
        "branch": "testing",
        "repository": "core",
        "architecture": "x86_64"
    }
}
Getting OK as expected
{
    "status": "ok"
}
Then I retry the compare, getting the expected result:
{
    "sections": [
        {
            "branch": "testing",
            "repository": "core",
            "architecture": "x86_64"
        },
        {
            "branch": "unstable",
            "repository": "core",
            "architecture": "x86_64"
        }
    ],
    "compareTable": {
        "lib32-glibc": {
            "testing/core/x86_64": {
                "sync": "2.39+r52+gf8e4623421-1"
            },
            "unstable/core/x86_64": {
                "sync": "2.39+r52+gf8e4623421-1"
            }
        },
        "libsecret": {
            "unstable/core/x86_64": {
                "sync": "0.21.4-1"
            },
            "testing/core/x86_64": {
                "sync": "0.21.4-1"
            }
        },
        [....]
Then I pulled the package log. I was expecting the snap I made to appear in the log, but the log looks the same, and the timestamps are from 11 years ago.
One example is linux-firmware-whence, where the filename is from May 10, 2024 but the timestamp is quite off:
{
    "id": "df2a78a5-7d42-4849-92b9-ee124cdb7117",
    "time": -1355047995,
    "type": "Add",
    "package": {
        "name": "linux-firmware-whence",
        "section": {
            "branch": "unstable",
            "repository": "core",
            "architecture": "x86_64"
        },
        "poolEntries": {
            "sync": {
                "version": "20240510.b9d2bf23-1",
                "hasSignature": true
            }
        },
        "preferredLocation": "sync"
    }
}
$ date -d @-1355047995
søn 23 jan 14:46:45 CET 1927
$ date -d @1355047995
søn 9 dec 11:13:15 CET 2012
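The two `date` calls can be reproduced in UTC. Notably, even the sign-flipped value lands in December 2012, not May 2024, so the stored value is not simply a negated valid timestamp; something else is corrupting it too:

```python
# Reproduce the two `date` conversions above in UTC.
from datetime import datetime, timezone

stored = datetime.fromtimestamp(-1355047995, tz=timezone.utc)
flipped = datetime.fromtimestamp(1355047995, tz=timezone.utc)
print(stored.isoformat())   # 1927-01-23T13:46:45+00:00 (14:46:45 CET)
print(flipped.isoformat())  # 2012-12-09T10:13:15+00:00 (11:13:15 CET)
```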
Under some conditions, the Exporter stops in the middle of the process.
Right now transactions are very limited; in fact, every single addition of an entity creates a separate underlying LMDB transaction. It would be much better to abstract the transaction out and allow application services/repositories to manage them.
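The "abstracted transaction" idea is essentially a unit of work: repositories buffer writes, and one underlying transaction commits them together or not at all. A pure-Python stand-in (in bxt this would wrap an actual LMDB txn; the class and its interface are illustrative assumptions):

```python
# Unit-of-work sketch: buffered writes, committed atomically on success.
class UnitOfWork:
    def __init__(self, store: dict):
        self._store = store      # stand-in for the LMDB environment
        self._pending = {}       # writes buffered inside the transaction

    def add(self, key, value):
        self._pending[key] = value  # not visible until commit

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self._store.update(self._pending)  # commit everything at once
        # on exception: pending writes are dropped (rollback), error propagates

store = {}
with UnitOfWork(store) as uow:
    uow.add("pkg/flex", "2.6.4-5")
    uow.add("pkg/logrotate", "3.21.0-2")
print(store)  # both entries landed in one commit
```

With this shape, application services decide transaction boundaries instead of every repository call opening its own.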
The pool path in bxt uses a different structure than BoxIt by default, to allow a more straightforward way to work with architectures.
The old one is:
.
├── overlay-arm
├── overlay
├── sync-arm
└── sync
while the new one is:
.
├── automated
│   ├── aarch64
│   └── x86_64
├── overlay
│   ├── aarch64
│   └── x86_64
└── sync
    ├── aarch64
    └── x86_64
We might want to make this structure more flexible, though, to support the legacy structure and potentially more. The best solution in my opinion is to put a box.pool setting into the box.yml scheme file, like this:
branches: [unstable, testing, stable]
repositories:
[core]:
architecture: x86_64
(box.pool):
template: "/{location}/"
[core]:
architecture: aarch64
(box.pool):
template: "/{location}-arm/"
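Expanding such a template is a one-liner; the `{location}` placeholder is the one from the scheme above, and the example values mirror the legacy layout:

```python
# Sketch of expanding the proposed box.pool template.
def pool_path(template: str, location: str) -> str:
    return template.format(location=location)

print(pool_path("/{location}/", "sync"))       # /sync/       (x86_64)
print(pool_path("/{location}-arm/", "sync"))   # /sync-arm/   (aarch64, legacy)
```

This way the legacy BoxIt layout and the new per-architecture layout are just two different templates.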
A signature can't be uploaded from the UI, as it's treated as a separate package.
I already have a fix; I will address it along with other small issues.
While the auth/permissions system is done and working, actual permission checks still have to be added to all endpoints.
Logs are still half-baked
Right now sync, being a "lengthy operation", shows an indication that it's in progress. It would be nice to have detailed sync info available in the web UI (maybe a full sync log, but it can just be real-time progress). That's easily implementable, as websockets support this type of work.
While the web UI is pretty convenient, we might need a CLI version for scripting and other use cases where a UI is excessive.
Right now the sync doesn't check whether the device has the required disk space; it would be good to check that we can successfully finish the sync before starting it.
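A pre-sync check could compare the free space on the pool's filesystem against the total size of the package list. A standard-library sketch (the function and the headroom margin are assumptions):

```python
# Hypothetical pre-sync disk-space check.
import shutil

def has_space_for(path: str, required_bytes: int, margin: float = 1.1) -> bool:
    """True if `path`'s filesystem can hold `required_bytes` plus headroom."""
    free = shutil.disk_usage(path).free
    return free >= required_bytes * margin  # keep ~10% headroom for indexes etc.
```

The `required_bytes` figure would come from summing the sizes in the remote package list before any download starts.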
Right now the manual uploading UI behaves very oddly and doesn't really do what it's meant to, i.e. allow uploading multiple packages into different sections. The fix is relatively easy, though.
For different builds (debug, release, CI).
https://martin-fieber.de/blog/cmake-presets/
https://cmake.org/cmake/help/latest/manual/cmake-presets.7.html
Implement token expiration and refresh after some period of time.
Dependency injection is actually a good way to decouple things, but in C++ it requires either some template magic or, as in the case of the Kangaru framework used in bxt, static mapping. For now it's handcrafted in di.h, and it really should not work like that.
Possible solutions:
libclang and its Python bindings.