gojek / darkroom
Home Page: https://www.gojek.io/darkroom/
License: MIT License
I suggest adding documentation separate from what we have in the README right now. It should be generated from the source in the /docs
folder and hosted on GitHub Pages by every CI release job.
The code has good GoDoc coverage, but there should be better project documentation showing how to use it; users might not be familiar with all the features that Darkroom provides.
If you have suggestions for building the docs with something other than GitHub Pages (e.g. Read the Docs), feel free to add them.
There are a few cases where we want to serve GIFs to our customers, but this does not work if we enable some of the parameters we support, such as auto=compress.
Currently, the behaviour is to return a non-successful response, leaving the client unable to show anything. At the very least, Darkroom could just return the original file when a non-JPEG/PNG file is requested with Darkroom's supported parameters.
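A minimal sketch of that fallback idea, using only the Go standard library; the helper name and wiring are illustrative, not Darkroom's actual API:

package fallback

import "net/http"

// processableFormats lists the content types Darkroom's parameters apply to.
var processableFormats = map[string]bool{
    "image/jpeg": true,
    "image/png":  true,
}

// ServeWithFallback is a hypothetical helper: it returns the processed image
// when the payload is JPEG/PNG, and otherwise falls back to the original
// bytes (e.g. a GIF) instead of failing the request.
func ServeWithFallback(original []byte, process func([]byte) ([]byte, error)) []byte {
    if !processableFormats[http.DetectContentType(original)] {
        return original // unsupported format: serve the file as-is
    }
    processed, err := process(original)
    if err != nil {
        return original // processing failed: still serve something usable
    }
    return processed
}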
Which jobs are failing:
Since when has it been failing:
Since add documentation commit https://github.com/gojek/darkroom/commit/21c61f2d1e3f292ff4750f7e8483e83c5c19cd13
This is the job
https://travis-ci.org/gojek/darkroom/builds/577348025?utm_source=github_status&utm_medium=notification
It fails in the application deploy stage.
Reason for failure:
nothing to commit, working tree clean
remote: Invalid username or password.
fatal: Authentication failed for 'https://[email protected]/gojek/darkroom.git/'
Error: Git push failed
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Anything else we need to know:
Add support for publishing metrics to Prometheus.
It would be fantastic if the metrics were not tied to a single backend, and even better if we had Prometheus support, for better integration with other CNCF projects like Kubernetes.
-> Need to design a metrics RFC for supporting Prometheus.
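A minimal sketch of what the publishing side could look like with the standard client_golang library; the metric name and labels below are illustrative, not part of any RFC:

package metrics

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// processDuration is an illustrative histogram for image processing latency.
var processDuration = prometheus.NewHistogramVec(
    prometheus.HistogramOpts{
        Name: "darkroom_process_duration_seconds",
        Help: "Time spent processing an image.",
    },
    []string{"operation"},
)

func init() {
    prometheus.MustRegister(processDuration)
}

// Handler exposes the /metrics endpoint for Prometheus to scrape.
func Handler() http.Handler {
    return promhttp.Handler()
}

// ObserveProcess records one processing duration for the given operation.
func ObserveProcess(operation string, seconds float64) {
    processDuration.WithLabelValues(operation).Observe(seconds)
}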
fit=scale support
Scales the image to fit the constraining dimensions exactly. The resulting image will fill the dimensions and will not maintain the aspect ratio of the input image.
Goal: to get resized images with fit type scale.
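A rough sketch of what fit=scale means in code, using golang.org/x/image/draw (not necessarily the library Darkroom's native processor uses): the destination rectangle is exactly the requested width and height, so the aspect ratio is not preserved.

package scale

import (
    "image"

    "golang.org/x/image/draw"
)

// Scale resizes src to exactly w x h pixels, ignoring the source aspect ratio.
func Scale(src image.Image, w, h int) image.Image {
    dst := image.NewRGBA(image.Rect(0, 0, w, h))
    // The whole source is mapped onto the whole destination, stretching as needed.
    draw.ApproxBiLinear.Scale(dst, dst.Bounds(), src, src.Bounds(), draw.Over, nil)
    return dst
}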
Installation method: Docker
Darkroom version: >= v0.0.5
Operating system: Ubuntu x86_64
Go version ('go version' output): go version go1.12.9 linux/amd64
docker pull gojektech/darkroom:v0.0.9
docker run -p 3000:3000 --env-file <path-to-darkroom.env> gojektech/darkroom:v0.0.9
standard_init_linux.go:211: exec user process caused "no such file or directory"
The server should run without errors.
This has most probably resulted from CGO_ENABLED being set in the build pipeline, which produces a dynamically linked binary that cannot start inside the minimal container image; building with CGO_ENABLED=0 would likely avoid this.
Currently, darkroom doesn't read the image EXIF data, so when an image is rotated, darkroom doesn't fix its orientation. I suggest that darkroom should check the image's orientation from the EXIF data and return an upright image when the auto=compress parameter is sent.
An example of how imgix handles this:
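A hedged sketch of reading the orientation tag with the commonly used github.com/rwcarlsen/goexif library (an assumption about tooling, not what Darkroom currently uses); the actual rotate/flip step would then be chosen from the returned value 1-8.

package orientation

import (
    "io"

    "github.com/rwcarlsen/goexif/exif"
)

// Read returns the EXIF orientation value (1-8); 1 means "already upright".
// Missing or unreadable EXIF data is treated as upright.
func Read(r io.Reader) int {
    x, err := exif.Decode(r)
    if err != nil {
        return 1
    }
    tag, err := x.Get(exif.Orientation)
    if err != nil {
        return 1
    }
    o, err := tag.Int(0)
    if err != nil || o < 1 || o > 8 {
        return 1
    }
    return o
}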
It would be good to clean up these linter errors before they start causing problems. Following some simplification and good code practices will also make the code easier to maintain. A sketch of one possible fix for the exported-constructor warnings follows the lint output below.
from make lint
pkg/config/config.go:86:15: exported func Source returns unexported type *config.source, which can be annoying to use
pkg/processor/native/utils_test.go:162:41: exported func NewMockImage returns unexported type *native.mockImage, which can be annoying to use
pkg/service/manipulator.go:166:52: exported func NewManipulator returns unexported type *service.manipulator, which can be annoying to use
pkg/processor/native/encoder.go:79:1: comment on exported method Encoders.Options should be of the form "Options ..."
from golangci-lint
pkg/processor/native/processor_test.go:21:2: `badImage` is unused (structcheck)
badImage image.Image
^
pkg/processor/native/processor_test.go:181:11: ineffectual assignment to `err` (ineffassign)
img, _, err := s.processor.Decode(file)
^
internal/handler/ping.go:12:3: S1023: redundant `return` statement (gosimple)
return
^
pkg/server/api.go:63:2: S1005: '_ = <-ch' can be simplified to '<-ch' (gosimple)
_ = <-sig
^
pkg/service/manipulator_test.go:132:27: S1019: should use make([]byte, 10) instead (gosimple)
ImageData: make([]byte, 10, 10),
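For the "exported func returns unexported type" warnings, a minimal sketch of one possible fix (names simplified from pkg/config; exporting the type is only one option, the constructor could also return an interface instead):

package config

// Before (from the lint output): the exported constructor returned the
// unexported type *source, which callers cannot name.
//   func Source() *source { ... }

// After (one possible fix, simplified here): export the type so callers
// and tests can refer to the constructor's return value.
type Source struct {
    // fields elided for the sketch
}

func NewSource() *Source {
    return &Source{}
}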
Makefile cleanup:
- We don't need to echo each Makefile command; this can be achieved by prefixing commands with the silent @.
Does anyone want to pick this up, or can I pick it up?
It would be really great if Darkroom started supporting GPU acceleration for image processing whenever possible.
The performance of the application server can be greatly improved if we use the GPU for processing images when one is available.
Attaching some benchmarks performed by @sohamkamani to support this feature request.
Machine Type | CPU Platform | GPUs |
---|---|---|
n1-standard-1 (1 vCPU, 3.75 GB memory) | Intel Ivy Bridge | 1 x NVIDIA Tesla K80 |
#include <opencv2/highgui.hpp>
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudawarping.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>
#include <ctime>

using namespace std;

int main(int argc, char **argv) {
    string input_file = "sample.jpg";
    string output_file = "out.jpg";

    // Read the input image from disk and upload it to the GPU
    cv::Mat inputCpu = cv::imread(input_file, 1);
    if (inputCpu.empty()) {
        cout << "Image Not Found: " << input_file << endl;
        return -1;
    }
    cv::cuda::GpuMat input(inputCpu);

    // GPU benchmark: downscale 4x on both x and y, 20 times
    cv::cuda::GpuMat output;
    clock_t start = clock();
    for (int i = 0; i < 20; i++) {
        cv::cuda::resize(input, output, cv::Size(0, 0), 0.25, 0.25, 3);
    }
    clock_t d1 = clock() - start;
    cout << "OpenCv Gpu code ran. Time:" << d1 << "\n";

    // Download the GPU result and write it to disk
    cv::Mat outputCpu;
    output.download(outputCpu);
    cv::imwrite(output_file, outputCpu);

    // CPU benchmark: the same 4x downscale, 20 times
    cv::Mat inputCpu2 = cv::imread(input_file, 1);
    cv::Mat outputCpu2;
    start = clock();
    for (int i = 0; i < 20; i++) {
        cv::resize(inputCpu2, outputCpu2, cv::Size(0, 0), 0.25, 0.25, 3);
    }
    clock_t d2 = clock() - start;
    cout << "OpenCv Cpu code ran. Time:" << d2 << "\n";

    input.release();
    output.release();
    return 0;
}
user@localhost:~$ ./a.out
OpenCv Gpu code ran. Time:226962
OpenCv Cpu code ran. Time:3231722
user@localhost:~$ ./a.out
OpenCv Gpu code ran. Time:226377
OpenCv Cpu code ran. Time:3277886
user@localhost:~$ ./a.out
OpenCv Gpu code ran. Time:217269
OpenCv Cpu code ran. Time:3254524
user@localhost:~$ ./a.out
OpenCv Gpu code ran. Time:184617
OpenCv Cpu code ran. Time:3342405
This shows that resizing an image with the GPU is 14-16 times faster than with the CPU.
q query parameter support in Manipulator, enabling dynamic quality based on the query.
Currently, all lossy images (JPEG & WebP) served by the same Processor have the same quality; it would be nice to be able to specify a q query parameter per request.
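A minimal sketch of the idea using only the standard library; reading the quality straight from the request here is illustrative, not Darkroom's Manipulator API:

package quality

import (
    "image"
    "image/jpeg"
    "io"
    "net/http"
    "strconv"
)

// EncodeJPEG writes img with the quality taken from the q query parameter,
// falling back to a default when q is missing or out of range.
func EncodeJPEG(w io.Writer, r *http.Request, img image.Image) error {
    q := 75 // illustrative default
    if v, err := strconv.Atoi(r.URL.Query().Get("q")); err == nil && v >= 1 && v <= 100 {
        q = v
    }
    return jpeg.Encode(w, img, &jpeg.Options{Quality: q})
}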
Add feature to perform blur operations on images.
I am working on it
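A possible sketch of a blur operation, assuming the github.com/disintegration/imaging package (which library the native processor would actually use is an open question):

package blur

import (
    "image"

    "github.com/disintegration/imaging"
)

// Blur applies a Gaussian blur; larger sigma values produce a stronger blur.
func Blur(img image.Image, sigma float64) image.Image {
    return imaging.Blur(img, sigma)
}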
Progressive JPEG support
Currently Darkroom only supports baseline JPEG, which loads from top to bottom.
There is also a progressive JPEG format, which loads from low quality to full quality, like this:
https://github.com/gojek/darkroom/blob/master/pkg/processor/README.md seems outdated and describes the interface incorrectly. You should update it or maybe just nuke it.
Open this image and you'll find that the image is not loaded at all.
.../darkroom/gofresh/v2/images/uploads/1c6561c7-87f8-4fa0-a480-f9386dbbed28_Shuffle-Subscription-In-Final.jpg?w=686&fit=crop&auto=compress
The old version supports this query param:
.../uploads/c0c65eee-9c64-49e9-af21-6e2378ec96fa.jpg?w=686&fit=crop&auto=compress
These query params include width, crop, and compress; the height is calculated from the image's aspect ratio and the width.
Currently, the S3 endpoint is automatically created based on the bucket name/region and is specific to AWS only.
DigitalOcean also supports an S3-based API and allows existing tools to work with its offering, called Spaces.
Current behaviour:
bucket:
  name: test
  region: ams3
Generated URL: test.s3.ams3.amazonaws.com
Required behaviour:
Generated URL: test.ams3.digitaloceanspaces.com
This will enable DigitalOcean users to utilize Darkroom over Spaces buckets.
Only a minimal change is required; the S3 Go SDK supports specifying an Endpoint in the configuration.
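A sketch of what that could look like with aws-sdk-go v1; the endpoint format below follows the Spaces URL in this issue, while the function name and wiring into Darkroom's own config are assumptions:

package storage

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

// newSpacesClient builds an S3 client pointed at a DigitalOcean Spaces region
// instead of the default AWS endpoint derived from the bucket region.
func newSpacesClient(region string) (*s3.S3, error) {
    sess, err := session.NewSession(&aws.Config{
        Region:   aws.String(region), // e.g. "ams3"
        Endpoint: aws.String("https://" + region + ".digitaloceanspaces.com"),
    })
    if err != nil {
        return nil, err
    }
    return s3.New(sess), nil
}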
darkroom/pkg/metrics/statsd_collector.go
Line 73 in 982cf59
darkroom/pkg/processor/native/utils.go
Line 84 in b920443
I guess those break statements aren't really required.
Darkroom storage HTTP byte-range request support
When utilizing Google CDN in front of an app (which uses darkroom) and the app serves a large file (>= 10 MB), we can forward the Google CDN byte-range request to the backing storage (e.g. S3, CloudFront, etc.) and forward the response from that storage back to Google CDN.
PR: #49
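For reference, a rough sketch of the range-forwarding idea using only the standard library; the handler name and storage URL wiring are assumptions, not Darkroom's storage interface:

package ranged

import (
    "io"
    "net/http"
)

// proxyRange forwards the client's Range header to the backing storage and
// relays the (possibly partial) response, so CDNs can fetch large files in chunks.
func proxyRange(w http.ResponseWriter, r *http.Request, storageURL string) {
    req, err := http.NewRequest(http.MethodGet, storageURL, nil)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    if rng := r.Header.Get("Range"); rng != "" {
        req.Header.Set("Range", rng) // e.g. "bytes=0-1048575"
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadGateway)
        return
    }
    defer resp.Body.Close()
    // Relay range-related headers and the storage status (200 or 206 Partial Content).
    for _, h := range []string{"Content-Range", "Accept-Ranges", "Content-Length", "Content-Type"} {
        if v := resp.Header.Get(h); v != "" {
            w.Header().Set(h, v)
        }
    }
    w.WriteHeader(resp.StatusCode)
    io.Copy(w, resp.Body)
}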
Add support for overlaying multiple images on a base image across multiple positions (9 anchor points).
We have a case where we need to add more than one overlay onto a single image.
The logic should be similar to the existing watermarking feature, with a few adjustments.
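A sketch of the compositing step with the standard image/draw package; the Overlay type and anchor handling below are made up for illustration, and the real implementation would follow the existing watermark code:

package overlay

import (
    "image"
    "image/draw"
)

// Overlay pairs an image with the point at which its top-left corner is placed.
type Overlay struct {
    Img image.Image
    At  image.Point // one of the nine anchor positions, pre-computed by the caller
}

// Compose draws every overlay onto a copy of the base image, in order.
func Compose(base image.Image, overlays []Overlay) image.Image {
    dst := image.NewRGBA(base.Bounds())
    draw.Draw(dst, dst.Bounds(), base, base.Bounds().Min, draw.Src)
    for _, o := range overlays {
        r := image.Rectangle{Min: o.At, Max: o.At.Add(o.Img.Bounds().Size())}
        draw.Draw(dst, r, o.Img, o.Img.Bounds().Min, draw.Over)
    }
    return dst
}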
Currently, the Processor interface is getting bulky as we add more operations to it. This creates a problem for implementing new Processor variants, as they have to implement all the methods, even those that are out of scope for that processor.
The Processor should instead have an array of operations that it supports; each Operation can be implemented independently, and the Processor can then be composed by selecting these Operations. An RFC document can be prepared to add more clarity on what needs to be changed to achieve this.
Ref:
darkroom/pkg/processor/interface.go
Lines 5 to 39 in 9186130
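A minimal sketch of how that composition could look; the interface and type names below are placeholders for the RFC, not existing darkroom code:

package processor

import "image"

// Operation is a single, independently implementable image transformation.
type Operation interface {
    Name() string
    Apply(img image.Image) (image.Image, error)
}

// Composed is a Processor variant built from only the operations it supports.
type Composed struct {
    ops []Operation
}

// NewComposed selects the operations this processor will expose.
func NewComposed(ops ...Operation) *Composed {
    return &Composed{ops: ops}
}

// Process runs the configured operations in order.
func (c *Composed) Process(img image.Image) (image.Image, error) {
    var err error
    for _, op := range c.ops {
        if img, err = op.Apply(img); err != nil {
            return nil, err
        }
    }
    return img, nil
}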
helm charts
Having Helm charts will let users start Darkroom with a single command, enabling more users to try, onboard, and improve Darkroom. The Helm chart will also enable easy local development, integration, and onboarding.
We just need to use the Go binary in the Dockerfile and run the server with a pre-configured application; the buckets can be existing ones, or created using some gsutil commands. This will help Darkroom reach more users.
Add cluster management and portal to manage Darkroom deployments on Kubernetes.
Darkroom should be super easy to deploy and manage. This can be done in many ways, but one of the faster ones would be to introduce a Kubernetes Operator that manages the lifecycle of a Darkroom app instance. This could be backed by a management UI that abstracts away the underlying details and makes cluster operation very efficient.
I did a PoC for this which can be found here.