sedthh / pyxelate
Python class that generates pixel art from images
License: MIT License
When installing with the command provided in the README with Python 3.8 and numpy==1.24.3 I get the following error:
AttributeError: module 'numpy' has no attribute 'float'.
`np.float` was a deprecated alias for the builtin `float`. To avoid this error in existing code, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations. Did you mean: 'cfloat'?
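The fix is exactly what the error message suggests: replace the removed `np.float` alias with either the builtin `float` or the explicit `np.float64` scalar type. A minimal sketch of the two equivalent spellings:

```python
import numpy as np

# `np.float` was removed in NumPy 1.24; it was only ever an alias for the
# builtin `float`. Either of these produces the same float64 dtype:
x = np.array([1, 2, 3], dtype=float)       # builtin float
y = np.array([1, 2, 3], dtype=np.float64)  # explicit NumPy scalar type

assert x.dtype == np.float64
assert y.dtype == np.float64
```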
Whenever I try to convert an image while giving height and width arguments, I get a ValueError about the downscaling factor being set, even though I only supplied the input, output, height, and width.
PS C:\...\pyxelate\pyxelate> python main.py "testIcon.jpg" "output.png" --height 32 --width 32
Pyxelating testIcon.jpg...
Traceback (most recent call last):
File "main.py", line 202, in <module>
main()
File "main.py", line 195, in main
convert(args)
File "main.py", line 37, in convert
pyx = get_model(args)
File "main.py", line 23, in get_model
return Pyx(
File "C:\...\pyxelate\pyxelate\pyx.py", line 121, in __init__
raise ValueError("You can only set either height + width or the downscaling factor, but not both!")
ValueError: You can only set either height + width or the downscaling factor, but not both!
Lines 120 to 121 in fbbcfbc
Seems like the argument parsing is defaulting the factor to 1
Line 91 in fbbcfbc
Setting the default to None makes the command work as intended (outputs a 32x32 image).
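The behavior can be reproduced with a small argparse sketch (the argument names mirror the CLI; the exact parser code in main.py may differ). With `default=1` the factor is always set, so Pyx sees both a factor and height/width and raises; with `default=None` it can tell the two modes apart:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--width", type=int, default=None)
parser.add_argument("--height", type=int, default=None)
# default=1 would make the factor look user-supplied on every run;
# default=None lets downstream code distinguish "not given" from "given".
parser.add_argument("--factor", type=int, default=None)

args = parser.parse_args(["--height", "32", "--width", "32"])
assert args.factor is None  # no longer conflicts with height + width
assert args.height == 32 and args.width == 32
```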
I'm having a hard time wrapping my head around what's going on, but it looks like the problem area is just a huge number of 2x2 images going through convolution.
What if it was just one big image? For example, convolution on an image with this kernel:
0 0 0
0 0 1
0 0 0
should be the same as moving the image over 1 pixel, and convolution on an image with this kernel:
0 0 0
-1 0 1
0 0 0
should be the same as duplicating the image, moving one right 1 pixel, moving the other duplicate left 1 pixel and multiplying the values by -1, and adding them together.
If there's an issue with 2x2 blocks overlapping due to the entire image moving, I think you could do something like mod(x - (floor(x / 2) * 2) + offset, 2) + (floor(x / 2) * 2) on the texture coordinates when they are in the 0 to (width-1) range rather than 0-1, thinking in GLSL at least. It's been a while since I've done GLSL though, so please don't take my word for it.
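The shift-kernel equivalences above can be checked numerically. A small sketch with scipy (wrap-around boundaries make the equality exact; note that convolution, as opposed to correlation, mirrors the kernel, which determines the shift direction):

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
img = rng.random((8, 8))

# A single 1 right of center: convolving equals shifting the image by one pixel.
shift_kernel = np.array([[0, 0, 0],
                         [0, 0, 1],
                         [0, 0, 0]], dtype=float)
shifted = convolve(img, shift_kernel, mode="wrap")
assert np.allclose(shifted, np.roll(img, 1, axis=1))

# The [-1, 0, 1] row is the difference of the two opposite shifts,
# i.e. a central-difference operator (the horizontal part of a Sobel).
diff_kernel = np.array([[0, 0, 0],
                        [-1, 0, 1],
                        [0, 0, 0]], dtype=float)
diff = convolve(img, diff_kernel, mode="wrap")
assert np.allclose(diff, np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
```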
Optimizing the transform function as much as possible could be quite interesting for future image sequence conversions.
A bit of profiling seems to reveal that calls to skimage functions are the major bottleneck (most notably equalize_adapthist, resize, median, and rgb<->hsv conversions, which account for ~80% of the time spent in transform when boost=True).
These functions all have heavily optimized cv2 equivalents, maybe we could speed up the code by using them instead?
Timer unit: 1e-06 s
Total time: 0.374083 s
File: <ipython-input-6-3ea2832d4ec7>
Function: transform at line 315
Line # Hits Time Per Hit % Time Line Contents
==============================================================
315 def transform(self, X, y=None):
316 """Transform image to pyxelated version"""
317 1 26.0 26.0 0.0 assert self.is_fitted, "Call 'fit(image_as_numpy)' first before calling 'transform(image_as_numpy)'!"
318 1 4.0 4.0 0.0 h, w, d = X.shape
319 1 3.0 3.0 0.0 if self.find_palette:
320 1 3.0 3.0 0.0 assert h * w > self.palette, "Too many colors for such a small image! Use a larger image or a smaller palette."
321 else:
322 assert h * w > len(self.palette), "Too many colors for such a small image! Use a larger image or a smaller palette."
323
324 1 8.0 8.0 0.0 new_h, new_w = self._get_size(h, w) # get desired size depending on settings
325 1 3.0 3.0 0.0 if d > 3:
326 # image has alpha channel
327 X_ = self._dilate(X)
328 alpha_mask = resize(X_[:, :, 3], (new_h, new_w), anti_aliasing=True)
329 else:
330 # image has no alpha channel
331 1 2.0 2.0 0.0 X_ = X
332 1 2.0 2.0 0.0 alpha_mask = None
333 1 3.0 3.0 0.0 if self.depth:
334 # change size depending on the number of iterations
335 1 5.0 5.0 0.0 new_h, new_w = new_h * (self.sobel ** self.depth), new_w * (self.sobel ** self.depth)
336 1 49371.0 49371.0 13.2 X_ = resize(X_[:, :, :3], (new_h, new_w), anti_aliasing=True) # colors are now 0. - 1.
337
338 1 5.0 5.0 0.0 if self.boost:
339 # adjust contrast
340 1 113935.0 113935.0 30.5 X_ = rgb2hsv(equalize_adapthist(X_))
341 1 1638.0 1638.0 0.4 X_[:, :, 1:] *= self.HIST_BRIGHTNESS
342 1 45119.0 45119.0 12.1 X_ = hsv2rgb(np.clip(X_, 0., 1.))
343
344 # pyxelate iteratively
345 2 8.0 4.0 0.0 for _ in range(self.depth):
346 1 2.0 2.0 0.0 if self.boost and d == 3:
347 # remove noise
348 1 78951.0 78951.0 21.1 X_ = self._median(X_)
349 1 16055.0 16055.0 4.3 X_ = self._pyxelate(X_) # downsample in each iteration
350
351 1 3.0 3.0 0.0 final_h, final_w, _ = X_.shape
352 1 2.0 2.0 0.0 if self.find_palette:
353 1 63.0 63.0 0.0 X_ = ((X_ - .5) * self.SCALE_RGB) + .5 # values were already altered before in .fit()
354 1 9.0 9.0 0.0 reshaped = np.reshape(X_, (final_h * final_w, 3))
355
356 # add dithering
357 1 2.0 2.0 0.0 if self.dither is None or self.dither == "none":
358 probs = self.model.predict(reshaped)
359 X_ = self.colors[probs]
360 1 1.0 1.0 0.0 elif self.dither == "naive":
361 # pyxelate dithering based on BGM probability density
362 1 4953.0 4953.0 1.3 probs = self.model.predict_proba(reshaped)
363 1 92.0 92.0 0.0 p = np.argmax(probs, axis=1)
364 1 1055.0 1055.0 0.3 X_ = self.colors[p]
365 1 86.0 86.0 0.0 probs[np.arange(len(p)), p] = 0
366 1 116.0 116.0 0.0 p2 = np.argmax(probs, axis=1) # second best
367 1 517.0 517.0 0.1 v1 = np.max(probs, axis=1) > (1. / (len(self.colors) + 1))
368 1 612.0 612.0 0.2 v2 = np.max(probs, axis=1) > (1. / (len(self.colors) * self.DITHER_NAIVE_BOOST + 1))
369 1 2.0 2.0 0.0 pad = not bool(final_w % 2)
370 8763 10951.0 1.2 2.9 for i in range(0, len(X_), 2):
371 8762 11332.0 1.3 3.0 m = (i // final_w) % 2
372 8762 10834.0 1.2 2.9 if pad:
373 i += m
374 8762 10942.0 1.2 2.9 if m:
375 4312 6475.0 1.5 1.7 if v1[i]:
376 862 2319.0 2.7 0.6 X_[i] = self.colors[p2[i]]
377 4450 5665.0 1.3 1.5 elif v2[i]:
378 1065 2790.0 2.6 0.7 X_[i] = self.colors[p2[i]]
379 elif self.dither == "bayer":
380 # Bayer-like dithering
381 self._warn_on_dither_with_alpha(d)
382 probs = self.model.predict_proba(reshaped)
383 probs = [convolve(probs[:, i].reshape((final_h, final_w)), self.DITHER_BAYER_MATRIX, mode="reflect") for i in range(len(self.colors))]
384 probs = np.argmin(probs, axis=0)
385 X_ = self.colors[probs]
386 elif self.dither == "floyd":
387 # Floyd-Steinberg-like algorithm
388 self._warn_on_dither_with_alpha(d)
389 X_ = self._dither_floyd(reshaped, (final_h, final_w))
390 elif self.dither == "atkinson":
391 # Atkinson-like algorithm
392 self._warn_on_dither_with_alpha(d)
393 res = np.zeros((final_h + 2, final_w + 3), dtype=int)
394 X_ = np.pad(X_, ((0, 2), (1, 2), (0, 0)), "reflect")
395 for y in range(final_h):
396 for x in range(1, final_w+1):
397 pred = self.model.predict_proba(X_[y, x, :3].reshape(-1, 3))
398 res[y, x] = np.argmax(pred)
399 quant_error = (X_[y, x, :3] - self.model.means_[res[y, x]]) / 8.
400 X_[y, x+1, :3] += quant_error
401 X_[y, x+2, :3] += quant_error
402 X_[y+1, x-1, :3] += quant_error
403 X_[y+1, x, :3] += quant_error
404 X_[y+1, x+1, :3] += quant_error
405 X_[y+2, x, :3] += quant_error
406 # fix edges
407 res = res[:final_h, 1:final_w+1]
408 X_ = self.colors[res.reshape(final_h * final_w)]
409
410 1 14.0 14.0 0.0 X_ = np.reshape(X_, (final_h, final_w, 3)) # reshape to actual image dimensions
411 1 1.0 1.0 0.0 if alpha_mask is not None:
412 # attach lost alpha layer
413 alpha_mask[alpha_mask >= self.alpha] = 255
414 alpha_mask[alpha_mask < self.alpha] = 0
415 X_ = np.dstack((X_[:, :, :3], alpha_mask.astype(int)))
416
417 # return upscaled image
418 1 88.0 88.0 0.0 X_ = np.repeat(np.repeat(X_, self.upscale[0], axis=0), self.upscale[1], axis=1)
419 1 16.0 16.0 0.0 return X_.astype(np.uint8)
Hi,
First of all, what a great tool! This has a great creative potential and I have been using it for a while now.
One issue I'm having: with image sequences, I get some dithered areas that jump around from frame to frame when run through pyxelate. I know it is a difficult problem to solve, as the shading of those areas changes from frame to frame. I can share some test images if you are interested. I wish there was a way to lock the dithering samples.
What I tried is generating the palette from a single image:
pyx = Pyx(upscale=1, factor=2, dither="naive", alpha=0.4, sobel=5, palette=7)
pyx.fit(init_image)
with init_image being a single image out of the whole sequence.
But that did not solve my issue.
The new version of the program runs significantly slower than the previous version (with the speedup provided by #18). A bit of profiling reveals the hog method as the main culprit.
I've implemented a few alternative algorithms I thought might show similar results, and benchmarked the time necessary to do a fit_transform using these different methods. I feel like the 2x2 sobel could be a pretty solid alternative to hog, what do you think?
(All benchmarks run with dither="naive", palette=6 and boost=True)
There's a small mistake in the labels, 3x3 sobel is in fact 2x2 and vice-versa
So, I looked around online a bit and stumbled across cupy, a library that basically wraps numpy functionality in an API that runs on the GPU to perform highly concurrent calculations faster.
I tinkered around a bit but never really got to a state where I could test it effectively, mainly because I am an absolute Python scrub and also have no clue about image computation whatsoever. But I am hoping that someone else can implement it into the code, just to see if it gives any performance upgrade on larger images.
Currently it's not that trivial to set up an environment for it, but I got it running on my Arch Linux with a GeForce 1050 Ti.
cupy GitHub page
cupy installation instructions
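Because cupy mirrors the numpy API, array-heavy steps could be made backend-agnostic with very little code. A sketch using the saturation-boost step from transform as an example (the factor value is illustrative, and the fallback keeps the code runnable without a GPU):

```python
import numpy as np

try:
    import cupy as cp  # GPU-backed, NumPy-compatible API
except ImportError:
    cp = None  # no GPU / cupy not installed: fall back to plain NumPy

def boost_saturation(hsv, factor=1.43):
    """Scale saturation and value channels, clipped to [0, 1].
    Runs on the GPU when cupy is available, on the CPU otherwise."""
    xp = cp if cp is not None else np
    x = xp.asarray(hsv)          # cp.asarray moves the data to the GPU
    x[..., 1:] = xp.clip(x[..., 1:] * factor, 0., 1.)
    return cp.asnumpy(x) if cp is not None else x  # move result back

hsv = np.random.default_rng(0).random((4, 4, 3))
out = boost_saturation(hsv.copy())
assert out.shape == hsv.shape and out.max() <= 1.0
```

Note that for small images the host-to-device transfer can easily cost more than the computation saves; the win, if any, would be on large images or long sequences.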
Currently there are only two ways to define the palette: full manual control, or auto detection. However, when using auto detection on an image with rich color, it usually fails to detect some key color that does not occupy a large area but is still critical (e.g. eyes: they are small, but they do exist).
Therefore, I suggest that auto detection (for example, with a 5-color palette) should also be able to accept manually defined colors within those 5 slots; in my example above, I could point out the eyes' color myself.
(This project is really good work, thank you!)
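One way the proposed feature could work, sketched with a generic clustering step rather than pyxelate's actual palette detection (the function name and its behavior are hypothetical, not the library's current API): pin the user-supplied colors and let clustering fill only the remaining slots.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_palette_with_pins(image, n_colors, pinned):
    """Hypothetical sketch: return n_colors RGB colors, with `pinned`
    colors kept verbatim and the remaining slots found by clustering."""
    pinned = np.asarray(pinned, dtype=float)
    pixels = image.reshape(-1, 3).astype(float)
    n_free = n_colors - len(pinned)
    # Cluster all pixels; only n_free centers are taken from the data.
    km = KMeans(n_clusters=n_free, n_init=10, random_state=0).fit(pixels)
    return np.vstack([pinned, km.cluster_centers_])

img = np.random.default_rng(0).integers(0, 255, (32, 32, 3))
palette = detect_palette_with_pins(img, 5, pinned=[[20, 60, 200]])  # pin the "eye" color
assert palette.shape == (5, 3)
```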
Hi, running the example on ArchLinux with numpy 1.21.5 (downgraded from numpy 1.22.3 which is current), I’m getting this error:
File "/usr/bin/pyxelate", line 33, in <module>
sys.exit(load_entry_point('pyxelate==2.1.1', 'console_scripts', 'pyxelate')())
File "/usr/bin/pyxelate", line 25, in importlib_load_entry_point
return next(matches).load()
File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 171, in load
module = import_module(match.group('module'))
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/usr/lib/python3.10/site-packages/pyxelate/__init__.py", line 11, in <module>
from .pyx import Pyx
File "/usr/lib/python3.10/site-packages/pyxelate/pyx.py", line 10, in <module>
from skimage.transform import resize
File "/usr/lib/python3.10/site-packages/skimage/__init__.py", line 151, in <module>
from ._shared import geometry
File "skimage/_shared/geometry.pyx", line 1, in init skimage._shared.geometry
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
No matter what is passed in via the command line, the parsed argument always evaluates to true. This means you can't disable dithering.
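This is the classic `type=bool` argparse trap: `bool("False")` is `True`, so any non-empty string parses as true. A sketch of two common fixes (the flag names are illustrative; the dither mode names come from the library):

```python
import argparse

parser = argparse.ArgumentParser()
# Fix 1: a presence flag instead of a parsed bool value.
parser.add_argument("--no-dither", action="store_true", help="disable dithering")
# Fix 2: explicit string choices, matching the supported dither modes.
parser.add_argument("--dither", default="naive",
                    choices=["none", "naive", "bayer", "floyd", "atkinson"])

args = parser.parse_args(["--no-dither"])
assert args.no_dither is True
assert parser.parse_args([]).no_dither is False  # absent flag really is False
```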
I played around a bit with active contours for black outlines, like here, before and after the application of the transformation, and I think it would be a great addition to the library; it should be pretty straightforward to implement. It can give a more comic-like look to the results.
The name is misleading: pixel art is not just any image with 8 colours. It's actually art made in a way that it appears to have depth, shadows, and highlights, using pixels to minimize color banding, and in your case it's full-on 100% color banding.
Maybe aim for 16 colours, or try to use dither patterns common in pixel art to make it look better.
I know it is not a simple color limiter, but the result still aesthetically looks like one.
When trying to pyxelate an image that is too large, the whole process stops.
First it prints a warning, Warning: Index 49153 is out of bounds for axis 0 with size 16384, but then it tries to convert again in line 298 in pyx.py and fails.
Maybe catch the out-of-bounds error in a separate except.
Complete trace:
Traceback (most recent call last):
File "pyx.py", line 289, in <module>
pyxelated = p.convert(image)
File "/dir/pyxelate/pyxelate.py", line 113, in convert
image = equalize_adapthist(image) * 255 * 1.14 # empirical magic number
File "/home/user/.local/lib/python3.8/site-packages/skimage/color/adapt_rgb.py", line 38, in image_filter_adapted
return apply_to_rgb(image_filter, image, *args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/skimage/color/adapt_rgb.py", line 60, in hsv_value
value = image_filter(value, *args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/skimage/exposure/_adapthist.py", line 85, in equalize_adapthist
image = _clahe(image, kernel_size, clip_limit * nbins, nbins)
File "/home/user/.local/lib/python3.8/site-packages/skimage/exposure/_adapthist.py", line 144, in _clahe
hist = lut[sub_img.ravel()]
IndexError: index 49153 is out of bounds for axis 0 with size 16384
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "pyx.py", line 298, in <module>
pyxelated = p.convert(image)
File "/dir/pyxelate/pyxelate.py", line 113, in convert
image = equalize_adapthist(image) * 255 * 1.14 # empirical magic number
File "/home/user/.local/lib/python3.8/site-packages/skimage/color/adapt_rgb.py", line 38, in image_filter_adapted
return apply_to_rgb(image_filter, image, *args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/skimage/color/adapt_rgb.py", line 60, in hsv_value
value = image_filter(value, *args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/skimage/exposure/_adapthist.py", line 85, in equalize_adapthist
image = _clahe(image, kernel_size, clip_limit * nbins, nbins)
File "/home/user/.local/lib/python3.8/site-packages/skimage/exposure/_adapthist.py", line 144, in _clahe
hist = lut[sub_img.ravel()]
IndexError: index 49153 is out of bounds for axis 0 with size 16384
Hi, I'm having trouble installing pyxelate on Anaconda, can you help? Congrats btw, it looks pretty neat.
Hi sedthh, this is great work that I really appreciate!
But I am confused about the svd function.
X_ is in the range [0., 1.] before self._svd is called.
Lines 362 to 365 in ae2de92
But in the _svd function, the result is still divided by 255.
What is the purpose of casting the color range to [0., 1. / 255.]?
Line 337 in ae2de92
Thank you for your great work, and I am looking forward to hearing from you!
When pyx.py is used with just a single input file in the current directory, i.e.:
python3 pyx.py -i myimage.png
the following error is thrown:
Traceback (most recent call last):
File "pyx.py", line 222, in <module>
i_path, i_base = str(input_dir).rsplit('/', 1)
ValueError: not enough values to unpack (expected 2, got 1)
You shouldn't expect the user to always use absolute paths.
PS:
python3 pyx.py -i ./myimage.png
isn't working either.
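A portable fix is to use the standard library's path handling instead of splitting on '/' by hand; os.path.split returns an empty directory part for bare filenames and also copes with Windows separators:

```python
import os

# str(path).rsplit('/', 1) raises ValueError when there is no '/' at all;
# os.path.split always returns a (directory, basename) pair.
for p in ["myimage.png", "./myimage.png", "some/dir/myimage.png"]:
    i_path, i_base = os.path.split(p)
    assert i_base == "myimage.png"

# Bare filename: the directory part is simply empty, no exception.
assert os.path.split("myimage.png") == ("", "myimage.png")
```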
Operating system: Windows 10
Package manager: Poetry
Python 3.9
Running via PyCharm
If I try to process an image larger than about 700x700, I get a NumPy memory error. But if I set factor > 2, there is no error. How can I fix it?
Traceback (most recent call last):
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 429, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\fastapi\applications.py", line 276, in __call__
await super().__call__(scope, receive, send)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\starlette\applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
raise e
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\starlette\routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\fastapi\routing.py", line 237, in app
raw_response = await run_endpoint_function(
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\Otorhin\PycharmProjects\pyxelate_api\app\routes.py", line 60, in get_image
transformed: bytes = pyx.transform(image)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\sklearn\utils\_set_output.py", line 140, in wrapped
data_to_wrap = f(self, X, *args, **kwargs)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\pyxelate\pyx.py", line 369, in transform
X_ = rgb2hsv(equalize_adapthist(X_))
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\skimage\color\adapt_rgb.py", line 38, in image_filter_adapted
return apply_to_rgb(image_filter, image, *args, **kwargs)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\skimage\color\adapt_rgb.py", line 62, in hsv_value
return color.hsv2rgb(hsv)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\skimage\_shared\utils.py", line 326, in fixed_func
return func(*args, **kwargs)
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\skimage\color\colorconv.py", line 371, in hsv2rgb
hi, np.stack([np.stack((v, t, p), axis=-1),
File "<__array_function__ internals>", line 180, in stack
File "C:\Users\Otorhin\AppData\Local\pypoetry\Cache\virtualenvs\pyxelate-api-A6rQyZ02-py3.9\lib\site-packages\numpy\core\shape_base.py", line 433, in stack
return _nx.concatenate(expanded_arrays, axis=axis, out=out)
File "<__array_function__ internals>", line 180, in concatenate
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 1.63 GiB for an array with shape (6, 3156, 3840, 3) and data type float64
I looked through the instructions and none of it made sense to me. ¯\_(ツ)_/¯
Hey, I'm having an odd issue.
Environment:
OS: macOS Big Sur 10.16 20G165 x86_64
$ uname -a
Darwin slim-Macbook.local 20.6.0 Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64 x86_64 i386 MacBookPro15,2 Darwin
$ xcode-select --version
xcode-select version 2384
$ python
Python 3.9.7 (default, Sep 14 2021, 16:22:39)
[Clang 12.0.5 (clang-1205.0.22.9)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print(sys.version)
3.9.7 (default, Sep 14 2021, 16:22:39)
[Clang 12.0.5 (clang-1205.0.22.9)]
$ pip list
Package Version
--------------- ---------
cycler 0.10.0
imageio 2.9.0
joblib 1.0.1
kiwisolver 1.3.2
llvmlite 0.37.0
matplotlib 3.4.3
networkx 2.6.3
numba 0.54.0
numpy 1.20.3
Pillow 8.3.2
pip 21.2.3
pyparsing 2.4.7
python-dateutil 2.8.2
PyWavelets 1.1.1
scikit-image 0.18.3
scikit-learn 1.0
scipy 1.7.1
setuptools 57.4.0
six 1.16.0
threadpoolctl 2.2.0
tifffile 2021.8.30
When I clone the repository and run the example notebook everything runs perfectly.
However when I try to copy the example code and run it standalone (or simply edit the notebook with an image of my own) I get a segmentation fault.
Any idea why this is happening? The segfault occurs with the example blazkowicz.jpg image too, even though it runs fine in the notebook on first run.
I am using Python 3.9.7, by the way. I've also tried installing all deps both in a virtual env and globally. I'm on a MacBook; do I need more resources?
EDIT: attempting to run the notebook again (specifically the first example with blazkowicz) causes the jupyter kernel to die.
Here is my stacktrace:
ERROR:asyncio:Exception in callback <TaskWakeupMethWrapper object at 0x112d2a0d0>(<Future finis...C: 1\r\n\r\n'>)
handle: <Handle <TaskWakeupMethWrapper object at 0x112d2a0d0>(<Future finis...C: 1\r\n\r\n'>)>
Traceback (most recent call last):
File "/Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
RuntimeError: Cannot enter into task <Task pending name='Task-4' coro=<HTTP1ServerConnection._server_request_loop() running at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tornado/http1connection.py:823> wait_for=<Future finished result=b'GET /api/co...PC: 1\r\n\r\n'> cb=[IOLoop.add_future.<locals>.<lambda>() at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tornado/ioloop.py:688]> while another task <Task pending name='Task-2' coro=<KernelManager._async_start_kernel() running at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/jupyter_client/manager.py:336>> is being executed.
ERROR:asyncio:Exception in callback <TaskWakeupMethWrapper object at 0x112f06ca0>(<Future finis...db1"\r\n\r\n'>)
handle: <Handle <TaskWakeupMethWrapper object at 0x112f06ca0>(<Future finis...db1"\r\n\r\n'>)>
Traceback (most recent call last):
File "/Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
RuntimeError: Cannot enter into task <Task pending name='Task-5' coro=<HTTP1ServerConnection._server_request_loop() running at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tornado/http1connection.py:823> wait_for=<Future finished result=b'GET /kernel...9db1"\r\n\r\n'> cb=[IOLoop.add_future.<locals>.<lambda>() at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/tornado/ioloop.py:688]> while another task <Task pending name='Task-2' coro=<KernelManager._async_start_kernel() running at /Users/red_rocket/.pyenv/versions/3.9.7/lib/python3.9/site-packages/jupyter_client/manager.py:336>> is being executed.
Diving deeper into the actual pyxelate source with the example code and a local copy of the pyxelate codebase, I pinpointed the exact call that causes the segfault in the pyx module: it is the BayesianGaussianMixture.fit() method. The specific line in pyx.py is the super().fit(X) call on line 74.
I also found a relevant issue regarding segfaults on the auto-sklearn repository: automl/auto-sklearn#688
Looks like there was some success using the auto-sklearn docker container, but it does not appear that pyxelate uses auto-sklearn, so I'm not sure if this docker container would fix the issue (I will attempt to run it in this container after posting).
I was wondering if there are some convolution operations that could create black edges within the final image. It would be nice to have such an option to give the pixelated images a more "cartoony" look (e.g. when pixelating people).
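One simple way such an option could be sketched: compute an edge magnitude (here a Sobel filter from skimage) and paint the strongest edge pixels black. The function and its threshold are illustrative, not part of the library:

```python
import numpy as np
from skimage import filters
from skimage.color import rgb2gray

def add_outlines(image, threshold=0.2):
    """Darken strong edges for a cartoon-like outline.
    Sketch of the proposed option; the threshold value is illustrative."""
    gray = rgb2gray(image)
    edges = filters.sobel(gray)            # per-pixel gradient magnitude
    mask = edges > threshold * edges.max() # keep only the strongest edges
    out = image.copy()
    out[mask] = 0.                         # paint edge pixels black
    return out

img = np.random.default_rng(0).random((16, 16, 3))
out = add_outlines(img)
assert out.shape == img.shape
```

Running this before pyxelation would let the outlines be absorbed into the palette; running it after keeps them pure black.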