dwt / fluent

Python wrapper for stdlib (and other) objects to give them a fluent interface.

License: ISC License


fluent's Introduction

fluentpy - The fluent Python library

Fluentpy provides fluent interfaces to existing APIs such as the standard library, allowing you to use them in an object oriented and fluent style.

Fluentpy is inspired by JavaScript's jQuery and underscore / lodash and takes some inspiration from the collections API in Ruby and SmallTalk.

Please note: This library is based on a wrapper that returns another wrapper for any operation you perform on a wrapped value. See the section Caveats below for details.

See Fowler, Wikipedia for definitions of fluent interfaces.


Motivation: Why use fluentpy?

Many of the most useful standard library methods such as map, zip, filter and join are either free functions or available on the wrong type or module. This prevents fluent method chaining.

Let's consider this example:

>>> list(map(str.upper, sorted("ba,dc".split(","), reverse=True)))
['DC', 'BA']

To understand this code, I have to start in the middle at "ba,dc".split(","), then backtrack to sorted(…, reverse=True), then to list(map(str.upper, …)). All the while making sure that the parentheses all match up.

Wouldn't it be nice if we could think and write code in the same order? Something like how you would write this in other languages?

>>> _("ba,dc").split(",").sorted(reverse=True).map(str.upper)._
['DC', 'BA']

"Why no, but python has list comprehensions for that", you might say? Let's see:

>>> [each.upper() for each in sorted("ba,dc".split(","), reverse=True)]
['DC', 'BA']

This is clearly better: To read it, I have to skip back and forth less. It still leaves room for improvement though. Also, adding filtering to list comprehensions doesn't help:

>>> [each.upper() for each in sorted("ba,dc".split(","), reverse=True) if each.upper().startswith('D')]
['DC']

The backtracking problem persists. Additionally, if the filtering has to be done on the processed version (on each.upper().startswith()), then the operation has to be applied twice - which sucks because you write it twice and compute it twice.

The solution? Nest them!

>>> [each for each in 
        (inner.upper() for inner in sorted("ba,dc".split(","), reverse=True))
        if each.startswith('D')]
['DC']

Which gets us back to all the initial problems with nested statements and manually having to check closing parentheses.

Compare it to this:

>>> processed = []
>>> parts = "ba,dc".split(",")
>>> for item in sorted(parts, reverse=True):
...     uppercased = item.upper()
...     if uppercased.startswith('D'):
...         processed.append(uppercased)

With basic Python, this is as close as it gets for code to read in execution order. So that is usually what I end up doing.

But it has a huge drawback: It's not an expression - it's a bunch of statements. That makes it hard to combine and abstract over it with higher order methods or generators. To write it you are forced to invent names for intermediate variables that serve no documentation purpose, but force you to remember them while reading.

Plus (drumroll): parsing this still requires some backtracking and especially build up of mental state to read.

Oh well.

So let's return to this:

>>> (
    _("ba,dc")
    .split(",")
    .sorted(reverse=True)
    .map(str.upper)
    .filter(_.each.startswith('D')._)
    ._
)
('DC',)

Sure you are not used to this at first, but consider the advantages. The intermediate variable names are abstracted away - the data flows through the methods completely naturally. No jumping back and forth to parse this at all. It just reads and writes exactly in the order it is computed. As a bonus, there's no parentheses stack to keep track of. And it is shorter too!

So what is the essence of all of this?

Python is an object oriented language - but it doesn't really use what object orientation has taught us about how we can work with collections and higher order methods in the languages that came before it (I think of SmallTalk here, but more recently also Ruby). Why can't I make those beautiful fluent call chains that SmallTalk could do 30 years ago in Python?

Well, now I can and you can too.

Features

Importing the library

It is recommended to rename the library on import:

>>> import fluentpy as _

or

>>> import fluentpy as _f

I prefer _ for small projects and _f for larger projects where gettext is used.

Super simple fluent chains

_ is actually the function wrap in the fluentpy module, a factory function that returns an instance of a suitable subclass of Wrapper. This is the basic and main object of this library.

This does two things: First it ensures that every attribute access, item access or method call off of the wrapped object will also return a wrapped object. This means that once you wrap something, it stays wrapped unless you explicitly unwrap it via ._, .unwrap or .to(a_type), pretty much no matter what you do with it. The second thing it does is return a subclass of Wrapper that has a specialized set of methods, depending on the type of what is wrapped. I envision this to expand in the future, but right now the most useful wrappers are:

  • IterableWrapper, where we add all the Python collection functions (map, filter, zip, reduce, …), as well as a good batch of methods from itertools and a few extras for good measure.
  • CallableWrapper, where we add .curry() and .compose().
  • TextWrapper, where most of the regex methods are added.
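To make the mechanism concrete, here is a deliberately tiny sketch of such a type-dispatching wrap factory in plain Python. This is an illustration only, not fluentpy's actual implementation, and all class and method names below are invented for the sketch:

```python
class Wrapper:
    """Base wrapper: holds a value and can give it back."""
    def __init__(self, wrapped):
        self._wrapped = wrapped

    @property
    def unwrap(self):
        return self._wrapped

class TextWrapper(Wrapper):
    def upper(self):
        # every operation wraps its result again, so chains stay fluent
        return wrap(self._wrapped.upper())

class IterableWrapper(Wrapper):
    def map(self, fn):
        return wrap(tuple(map(fn, self._wrapped)))

def wrap(obj):
    """Factory: pick a specialized wrapper based on the wrapped type."""
    if isinstance(obj, str):
        return TextWrapper(obj)
    if hasattr(obj, '__iter__'):
        return IterableWrapper(obj)
    return Wrapper(obj)
```

With this toy version, wrap([1, 2]).map(lambda x: x + 1).unwrap returns (2, 3) and wrap('ab').upper().unwrap returns 'AB'; fluentpy's real wrappers additionally forward every unknown attribute access to the wrapped object.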

Some examples:

# View documentation on a symbol without having to wrap the whole line in parentheses
>>> _([]).append.help()
Help on built-in function append:

append(object, /) method of builtins.list instance
    Append object to the end of the list.

# Introspect objects without awkwardly wrapping stuff in parentheses
>>> _(_).dir()
fluentpy.wrap(['CallableWrapper', 'EachWrapper', 'IterableWrapper', 'MappingWrapper', 'ModuleWrapper', 'SetWrapper', 'TextWrapper', 'Wrapper', 
'_', '_0', '_1', '_2', '_3', '_4', '_5', '_6', '_7', '_8', '_9', 
…
, '_args', 'each', 'lib', 'module', 'wrap'])
>>> _(_).IterableWrapper.dir()
fluentpy.wrap(['_', 
…, 
'accumulate', 'all', 'any', 'call', 'combinations', 'combinations_with_replacement', 'delattr', 
'dir', 'dropwhile', 'each', 'enumerate', 'filter', 'filterfalse', 'flatten', 'freeze', 'get', 
'getattr', 'groupby', 'grouped', 'hasattr', 'help', 'iaccumulate', 'icombinations',
'icombinations_with_replacement', 'icycle', 'idropwhile', 'ieach', 'ienumerate', 'ifilter',
'ifilterfalse', 'iflatten', 'igroupby', 'igrouped', 'imap', 'ipermutations', 'iproduct', 'ireshape', 
'ireversed', 'isinstance', 'islice', 'isorted', 'issubclass', 'istar_map', 'istarmap', 'itee', 
'iter', 'izip', 'join', 'len', 'map', 'max', 'min', 'permutations', 'pprint', 'previous', 'print', 
'product', 'proxy', 'reduce', 'repr', 'reshape', 'reversed', 'self', 'setattr', 'slice', 'sorted', 
'star_call', 'star_map', 'starmap', 'str', 'sum', 'to', 'type', 'unwrap', 'vars', 'zip'])

# Did I mention that I hate wrapping everything in parentheses?
>>> _([1,2,3]).len()
3
>>> _([1,2,3]).print()
[1, 2, 3]

# map over iterables and easily curry functions to adapt their signatures
>>> _(range(3)).map(_(dict).curry(id=_, delay=0)._)._
({'id': 0, 'delay': 0}, {'id': 1, 'delay': 0}, {'id': 2, 'delay': 0})
>>> _(range(10)).map(_.each * 3).filter(_.each < 10)._
(0, 3, 6, 9)
>>> _([3,2,1]).sorted().filter(_.each<=2)._
(1, 2)

# Directly work with regex methods on strings
>>> _("foo,  bar,      baz").split(r",\s*")._
['foo', 'bar', 'baz']
>>> _("foo,  bar,      baz").findall(r'\w{3}')._
['foo', 'bar', 'baz']

# Embed your own functions into call chains
>>> seen = set()
>>> def havent_seen(number):
...     if number in seen:
...         return False
...     seen.add(number)
...     return True
>>> (
...     _([1,3,1,3,4,5,4])
...     .dropwhile(havent_seen)
...     .print()
... )
(1, 3, 4, 5, 4)

And much more. Explore the method documentation for what you can do.

Imports as expressions

Import statements are (ahem) statements in Python. This is fine, but can be really annoying at times.

The _.lib object, a wrapper around the Python import machinery, allows anything that is importable to be used as an expression for inline use.

So instead of

>>> import sys
>>> lines = sys.stdin.readlines()

You can do

>>> lines = _.lib.sys.stdin.readlines()._

As a bonus, everything imported via lib is already pre-wrapped, so you can chain off of it immediately.
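The core of such a lib object can be sketched with the standard importlib machinery (a simplified stand-in, not fluentpy's code; in particular it returns plain modules rather than pre-wrapped ones):

```python
import importlib

class Lib:
    """Turns attribute access into imports: Lib().sys is the sys module."""
    def __getattr__(self, name):
        module = importlib.import_module(name)
        setattr(self, name, module)  # cache, so later accesses skip __getattr__
        return module

lib = Lib()
```

Now lib.json.dumps([1]) works as an expression, with no separate import statement needed.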

Generating lambdas from expressions

lambda is great - it's often exactly what the doctor ordered. But it can also be annoying if you have to write one down every time you just want to get an attribute or call a method on every object in a collection. For example:

>>> _([{'fnord':'foo'}, {'fnord':'bar'}]).map(lambda each: each['fnord'])._
('foo', 'bar')

>>> class Foo(object):
...     attr = 'attrvalue'
...     def method(self, arg): return 'method+'+arg
>>> _([Foo(), Foo()]).map(lambda each: each.attr)._
('attrvalue', 'attrvalue')

>>> _([Foo(), Foo()]).map(lambda each: each.method('arg'))._
('method+arg', 'method+arg')

Sure it works, but wouldn't it be nice if we could save a variable and do this a bit shorter?

Python does have attrgetter, itemgetter and methodcaller - they are just a bit inconvenient to use:

>>> from operator import itemgetter, attrgetter, methodcaller
>>> _([{'fnord':'foo'}, {'fnord':'bar'}]).map(itemgetter('fnord'))._
('foo', 'bar')
>>> _([Foo(), Foo()]).map(attrgetter('attr'))._
('attrvalue', 'attrvalue')

>>> _([Foo(), Foo()]).map(methodcaller('method', 'arg'))._
('method+arg', 'method+arg')

>>> _([Foo(), Foo()]).map(methodcaller('method', 'arg')).map(str.upper)._
('METHOD+ARG', 'METHOD+ARG')

To ease this, _.each is provided. each exposes a bit of syntactic sugar for these (and the other operators). Basically, everything you do to _.each is recorded and later 'played back' when you generate a callable from it, by either unwrapping it or applying an operator like `+ - * / <`, which automatically calls unwrap.

>>>  _([1,2,3]).map(_.each + 3)._
(4, 5, 6)

>>> _([1,2,3]).filter(_.each < 3)._
(1, 2)

>>> _([1,2,3]).map(- _.each)._
(-1, -2, -3)

>>> _([dict(fnord='foo'), dict(fnord='bar')]).map(_.each['fnord']._)._
('foo', 'bar')

>>> _([Foo(), Foo()]).map(_.each.attr._)._
('attrvalue', 'attrvalue')

>>> _([Foo(), Foo()]).map(_.each.method('arg')._)._
('method+arg', 'method+arg')

>>> _([Foo(), Foo()]).map(_.each.method('arg').upper()._)._
('METHOD+ARG', 'METHOD+ARG')
# Note that there is no second map needed to call `.upper()` here!

The rule is that you have to unwrap ._ the each object to generate a callable that you can then hand off to .map(), .filter() or wherever you would like to use it.
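The record-and-replay idea behind _.each can be illustrated with a stand-alone miniature (hypothetical names, only a handful of operators; the real _.each covers far more):

```python
class Each:
    """Records operations; 'unwrap' builds a callable that replays them."""
    def __init__(self, ops=()):
        self._ops = ops

    def _record(self, op):
        # operators don't compute anything yet, they just extend the recording
        return Each(self._ops + (op,))

    def __add__(self, other):
        return self._record(lambda value: value + other)

    def __lt__(self, other):
        return self._record(lambda value: value < other)

    def __getitem__(self, key):
        return self._record(lambda value: value[key])

    @property
    def unwrap(self):
        def replay(value):
            for op in self._ops:
                value = op(value)
            return value
        return replay

each = Each()
```

With this toy version, list(map((each + 3).unwrap, [1, 2, 3])) yields [4, 5, 6], mirroring _([1,2,3]).map(_.each + 3)._ above.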

Chaining off of methods that return None

A major nuisance for fluent interfaces are methods that return None. Sadly, many methods in Python return None if their primary purpose is a side effect on the object; consider for example list.sort(). Also, all methods without a return statement return None. While this is far better than e.g. Ruby, where a method returns the value of its last expression (so objects constantly leak internals), it is very annoying if you want to chain off of one of these method calls.

Fear not though, Fluentpy has you covered. :)

Fluent wrapped objects will have a self property, that allows you to continue chaining off of the previous 'self' object.

>>> _([3,2,1]).sort().self.reverse().self.call(print)

Even though both sort() and reverse() return None.

Of course, if you unwrap at any point with .unwrap or ._ you will get the true return value of None.
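The bookkeeping that makes self possible can be sketched as follows (a toy model under the assumption that each wrapper simply remembers its predecessor; not fluentpy's actual implementation):

```python
class ChainWrapper:
    def __init__(self, wrapped, previous=None):
        self._wrapped = wrapped
        self._previous = previous

    def __getattr__(self, name):
        attr = getattr(self._wrapped, name)
        if callable(attr):
            # remember the current wrapper so `.self` can restore it,
            # even when the method returns None
            def method(*args, **kwargs):
                return ChainWrapper(attr(*args, **kwargs), previous=self)
            return method
        return ChainWrapper(attr, previous=self)

    @property
    def self(self):
        # step back to the wrapper the last method was called on
        return self._previous

    @property
    def unwrap(self):
        return self._wrapped
```

With this sketch, ChainWrapper([3, 1, 2]).sort().self.reverse().self.unwrap returns [3, 2, 1], even though both sort() and reverse() return None.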

Easy Shell Filtering with Python

Often it would be super easy to achieve something on the shell with a bit of Python. But the backtracking (while writing), as well as the tendency of Python commands to span many lines (imports, function definitions, ...), often makes this just impractical enough that you won't do it.

That's why fluentpy is an executable module, so that you can use it on the shell like this:

$ echo 'HELLO, WORLD!' \
    | python3 -m fluentpy "lib.sys.stdin.readlines().map(str.lower).map(print)"
hello, world!

In this mode, the variables lib, _ and each are injected into the namespace of the Python commands given as the first positional argument.

Consider this shell text filter, that I used to extract data from my beloved but sadly pretty legacy del.icio.us account. The format looks like this:

$ tail -n 200 delicious.html|head
<DT><A HREF="http://intensedebate.com/" ADD_DATE="1234043688" PRIVATE="0" TAGS="web2.0,threaded,comments,plugin">IntenseDebate comments enhance and encourage conversation on your blog or website</A>
<DD>Comments on static websites
<DT><A HREF="http://code.google.com/intl/de/apis/socialgraph/" ADD_DATE="1234003285" PRIVATE="0" TAGS="api,foaf,xfn,social,web">Social Graph API - Google Code</A>
<DD>API to try to find metadata about who is a friend of who.
<DT><A HREF="http://twit.tv/floss39" ADD_DATE="1233788863" PRIVATE="0" TAGS="podcast,sun,opensource,philosophy,floss">The TWiT Netcast Network with Leo Laporte</A>
<DD>Podcast about how SUN sees the society evolve from a hub and spoke to a mesh society and how SUN thinks it can provide value and profit from that.
<DT><A HREF="http://www.xmind.net/" ADD_DATE="1233643908" PRIVATE="0" TAGS="mindmapping,web2.0,opensource">XMind - Social Brainstorming and Mind Mapping</A>
<DT><A HREF="http://fun.drno.de/pics/What.jpg" ADD_DATE="1233505198" PRIVATE="0" TAGS="funny,filetype:jpg,media:image">What.jpg 480×640 pixels</A>
<DT><A HREF="http://fun.drno.de/pics/english/What_happens_to_your_body_if_you_stop_smoking_right_now.gif" ADD_DATE="1233504659" PRIVATE="0" TAGS="smoking,stop,funny,informative,filetype:gif,media:image">What_happens_to_your_body_if_you_stop_smoking_right_now.gif 800×591 pixels</A>
<DT><A HREF="http://www.normanfinkelstein.com/article.php?pg=11&ar=2510" ADD_DATE="1233482064" PRIVATE="0" TAGS="propaganda,israel,nazi">Norman G. Finkelstein</A>

$ cat delicious.html | grep hosting \
   | python3  -c 'import sys,re; \
       print("\n".join(re.findall(r"HREF=\"([^\"]+)\"", sys.stdin.read())))'
https://uberspace.de/
https://gitlab.com/gitlab-org/gitlab-ce
https://www.docker.io/

Sure it works, but with all the backtracking problems I talked about already. Using fluentpy this could be much nicer to write and read:

 $ cat delicious.html | grep hosting \
     | python3 -m fluentpy 'lib.sys.stdin.read().findall(r"HREF=\"([^\"]+)\"").map(print)'  
https://uberspace.de/
https://gitlab.com/gitlab-org/gitlab-ce
https://www.docker.io/

Caveats and lessons learned

Start and end Fluentpy expressions on each line

If you do not end each fluent statement with a ._, .unwrap or .to(a_type) operation to get a normal Python object back, the wrapper will spread in your runtime image like a virus, 'infecting' more and more objects and causing strange side effects. So remember: when using fluentpy in bigger projects, always religiously unwrap your objects at the end of a fluent statement.

>>> _('foo').uppercase().match('(foo)').group(0)._

It is usually a bad idea to commit wrapped objects to variables; just unwrap instead. This is especially sensible since fluent chains keep references to all their intermediate values, so unwrapping a chain gives the garbage collector permission to release all those objects.

Forgetting to unwrap an expression generated from _.each may be a bit surprising, as every call on them just causes more expression generation instead of triggering their effect.

That being said, str() and repr() output of fluent wrapped objects is clearly marked, so this is easy to debug.

Also, not having to unwrap may be perfect for short scripts and especially 'one-off' shell commands. However: use Fluentpy's power wisely!

Split expression chains into multiple lines

Longer fluent call chains are best written on multiple lines. This helps readability and eases commenting on lines (as your code can become very terse this way).

For short chains one line might be fine.

_(open('day1-input.txt')).read().replace('\n','').call(eval)._

For longer chains multiple lines are much cleaner.

day1_input = (
    _(open('day1-input.txt'))
    .readlines()
    .imap(eval)
    ._
)

seen = set()
def havent_seen(number):
    if number in seen:
        return False
    seen.add(number)
    return True

(
    _(day1_input)
    .icycle()
    .iaccumulate()
    .idropwhile(havent_seen)
    .get(0)
    .print()
)

Consider the performance implications of Fluentpy

This library works by creating another instance of its wrapper object for every attribute access, item get or method call you make on an object. Those objects also retain a history chain of all previous wrappers in the chain (to cope with functions that return None).

This means that in tight inner loops, where even allocating one more object would harshly impact the performance of your code, you probably don't want to use fluentpy.

Also (again) this means that you don't want to commit fluent objects to long lived variables, as that could be the source of a major memory leak.

And for everywhere else: go to town! Coding Python in a fluent way can be so much fun!

Famous Last Words

This library tries to do a little of what libraries like underscore or lodash or jQuery do for Javascript. Just provide the missing glue to make the standard library nicer and easier to use. Have fun!

I envision this library to be especially useful in short Python scripts and shell one liners or shell filters, where Python was previously just that little bit too hard to use and prevented you from doing so.

I also really like its use in notebooks or in a python shell to smoothly explore some library, code or concept.

fluent's People

Contributors: cromulentbanana, dwt, gpkc, paw-eloquent-safe

fluent's Issues

Automatic fallback to .self on AttributeError

What is the reasoning for not falling back to self automatically if a wrapped function returns None?

for example,

_([1, 5, 2]).append(6).append(3).sort().call(print)._

would be way nicer than

_([1, 5, 2]).append(6).self.append(3).self.sort().self.call(print)._

Could there be an option to enable such a fallback?

How to convert iterable of tuples to dict?

Hey there, I have a fluentpy expression along these lines:

a = [1,2,3,4]
b = "abcd"
_(a).zip(b).print()._

This gives me an iterable of 2-tuples. Obviously, I could wrap this in a dict() call, but that is not really fluent.

dict(_(a).zip(b)._)

Is there a fluent way to do this conversion?
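For reference, plain Python gets there with dict(zip(...)); and since the README documents .to(a_type) as an unwrapping terminator, _(a).zip(b).to(dict) may well be the fluent spelling, though that is worth verifying against your installed version. The stdlib-only equivalent:

```python
a = [1, 2, 3, 4]
b = "abcd"

# plain-Python equivalent of the conversion asked about
mapping = dict(zip(a, b))
print(mapping)  # {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
```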

Automatic unwrapping of _.each

@kbauer: I'd like to discuss auto unwrapping separately; your input is very welcome.

Regarding the auto conversion in map(): I am constantly annoyed by this too, but I am extremely reluctant to change it, mainly for these reasons:

  1. _.each should behave the same for all the other iterators like .each(), .filter() and friends
  2. _.each should behave consistently in all contexts if possible:
_(dict(foo='bar')).call(_.each['foo'])._ == 'bar'
  3. It should be possible to abstract over _.each with .map() and friends. Like this:
_(['foo', 'bar', 'baz']).map(_.each).map(lambda each: each._).map(_.each(print)._)

Now that I think about it, it is really hard to come up with a useful example, where one would want to abstract over _.each with the iterators. Maybe it would be good enough to have an off switch for auto termination that one would have to explicitly set on _.each. Maybe something like _.each._disable_auto_termination() (now that is ugly, but also unlikely to clash with any operation you want to do on _.each)

I am really not sure. I know that it is impossible to really fix _.each auto termination in all circumstances, as you can always wrap in a dict or something to make it invisible to wrapper().

But maybe 'most of the time' is actually good enough to make a difference?

As for a migration, we could start by adding a warning to all wrappers if you hand in an unterminated _.each, and ask users to report if they see it. That way we get a feel for whether there is actual usage of this feature.

Better subprocess integration

Regarding pipe: Will try, but it took me days just to answer your comments :/ For now I am just attaching my experiment (pipe.py.txt).

Regarding sh: It looks quite interesting, and most of my scripting is Unix anyway (or WSL if on Windows). It is however somewhat inconvenient to use with fluentpy.

_(range(10)).map(lambda it: str(it)+"\n").call(lambda it: sh.sed("s/^/>> /", _in=it)).to(list)
                                                      ^^                         ^^

In scripting contexts, something like

_(strings).sh.sed("s/^/>> /").to(list)

would be preferable. As it is, that would make sh a thin wrapper around the sh module that sets _in. And maybe for consistency some ish, that also sets _iter=True...

The downside would be, that it would introduce a feature that doesn't work on Windows. My popen based experiment is platform independent by contrast. Will look into it more.

Originally posted by @kbauer in #6 (comment)

New Features on _.each

Ah, all clear regarding the missing comment. But yes, I would like to add as many overloads as are possible with _.each - so keep coming back if you find something that can be implemented.

With some delay... :)

_.each

  • __rmul__ to support 2*_.each. Similar for other __r...__ functions, see e.g. _(dir(int)).filter(lambda it: it.startswith("__r")).to(list).
  • Invoking methods on _.each doesn't work as expected, e.g. _(dir(int)).filter(_.each.startswith("__r")).to(list) doesn't filter anything out.
  • The &, | operators could be overloaded as eager boolean operators (like numpy does). Though precedence rules are an issue, it would allow writing _(range(10)).filter( (_.each > 3) & (_.each < 7) ).to(list).

Prefixes

Regarding the prefixes: I really don't know what the best way here would be. I kind of like that the default case is not lazy, and I find the i prefix intuitive (though python2-ish). At the same time I haven't found a good prefix for a non-lazy variant.

That being said - I tried at least to be internally consistent, but am of course open to suggestions.

"Iterator by default" can be annoying for debugging, since lazy evaluation often doesn't produce useful stack traces.
"Iterator by default" can be annoying for debugging, because it delays the failure. In

import fluentpy as _

def fail(value):
    assert False, "fail"

items = _(range(10)).imap(fail)

for item in items:
    print(item)

the line where items is defined doesn't appear in the backtrace while with map it would. So, for my personal use-cases your current choice almost always will be preferable actually.

On the other hand, making eager evaluation the default means that code written with fluentpy will be brittle. The original purpose of a function might work well with _(...).filter, but will suddenly fail if the passed iterable is large or infinite.

Documentation (1)

It could be useful to point out, that the wrappers can be used as decorators to side-step the limited features of lambdas. E.g.

items = _(range(5))
@items.call
def items(its):
    for it in its:
        yield it
        yield it
print(items.to(list))

This would be yet another advantage of the suffix notation for filtering effects...

Documentation (2)

Ad-hoc, I thought it might be useful to provide custom extensions, e.g. let's say implementing a .groupby that groups non-consecutive elements. Until I realized, that this is covered by .call. It might be worth pointing this out explicitly.

Subprocess utility

A wrapper around subprocess.Popen would be useful, that allows to do something like

lines = _(["hello","world"]).pipe(["sed", "s/l/x/g"]).to(list)

It can be done with .call and a function wrapping around subprocess.Popen; In order to prevent deadlocks, it requires multiple threads though; I had some success building a function that allowed _(...).call(pipe(...)).... by having a thread feed the input iterator to the Popen.stdin and returning an iterator over Popen.stdout.

Originally posted by @kbauer in #5 (comment)
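A deadlock-free variant that avoids the extra thread can be sketched with subprocess.run, at the cost of buffering all input and output (so it cannot stream infinite iterables; the pipe name and signature below are invented for the sketch):

```python
import subprocess

def pipe(lines, argv):
    """Feed an iterable of lines to a command; return its output lines."""
    completed = subprocess.run(
        argv,
        input="".join(line + "\n" for line in lines),
        capture_output=True,
        text=True,
        check=True,  # raise if the command fails
    )
    return completed.stdout.splitlines()
```

On a system with sed, pipe(["hello", "world"], ["sed", "s/l/x/g"]) would then return ['hexxo', 'worxd']; because subprocess.run collects all output before returning, no feeder thread is needed.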

Thank you! I needed this fluent interface library you can not believe how much.

Feel free to delete this issue as spam.. I am just so relieved this library exists. I have spent days searching for and trying out alternatives - such as the pipe library (and even contributed to it) and using composition using functools.reduce. This is just way better. Oh - and the lib to import all of python .. man you're a genius.

New release?

Last official release was twelve months back, and you have done quite a bit of work since then. While I can git clone and install locally, it would be helpful to have a pip-installable release for distributions.

Possible solution to "call()"

Howdy,

Thanks for a great library. For various reasons, I decided to write my own version of it, but it was heavily influenced by your work.

In my version, I was able to fix the "call()" issue you mention in your readme. Specifically, if I want to call the method foo on each element in my iterable, I can do
from_(myit).map(each.foo())
rather than
from_(myit).map(each.foo.call)

For example, here, I can get the backing numpy array from every axes in a figure and concatenate them:

from_(plt.gcf().axes)\
    .map(each.images[0].get_data())\
    .map(each.reshape(1,512,512))\
    .to(np.vstack)

My code is attached. Hope this helps!
fluentutil.py.txt

python 3.12 import fails

=============================================================
(python environment governed via pyenv : here : use globally installed 3.12

$ pip show fluentpy
Name: fluentpy
Version: 2.1.1
Summary: Python wrapper for stdlib (and your) objects to give them a fluent interface.
Home-page: https://github.com/dwt/fluent
Author: Martin Häcker
Author-email: [email protected]
License: ISC
Location: /usr/local/lib/python3.12/site-packages
Requires:
Required-by:

$ python
Python 3.12.3 (main, Apr 17 2024, 00:00:00) [GCC 13.2.1 20240316 (Red Hat 13.2.1-7)] on linux
Type "help", "copyright", "credits" or "license" for more information.

import fluentpy as _
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.12/site-packages/fluentpy/__init__.py", line 36, in <module>
    import fluentpy.wrapper
  File "/usr/local/lib/python3.12/site-packages/fluentpy/wrapper.py", line 112, in <module>
    class Wrapper(object):
  File "/usr/local/lib/python3.12/site-packages/fluentpy/wrapper.py", line 303, in Wrapper
    type = wrapped(type)
           ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/fluentpy/wrapper.py", line 71, in wrapped
    @functools.wraps(wrapped_function)
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.12/functools.py", line 56, in update_wrapper
    setattr(wrapper, attr, value)
TypeError: type_params must be set to a tuple

=============================================================
(python environment governed via pyenv : here : use locally installed 3.11)

$ pip show fluentpy
Name: fluentpy
Version: 2.1.1
Summary: Python wrapper for stdlib (and your) objects to give them a fluent interface.
Home-page: https://github.com/dwt/fluent
Author: Martin Häcker
Author-email: [email protected]
License: ISC
Location: /home/rotten/.pyenv/versions/3.11.3/lib/python3.11/site-packages
Requires:
Required-by:

$ python
Python 3.11.3 (main, Apr 30 2024, 19:38:50) [GCC 13.2.1 20240316 (Red Hat 13.2.1-7)] on linux
Type "help", "copyright", "credits" or "license" for more information.

import fluentpy as _

=============================================================
what to do ?
in need of any further information ?

thx
w.

Typehinting for IDE suggestions.

Hey there,

I love the idea of this library. One issue though: there are no IDE autocomplete suggestions when using it.

I managed to get rudimentary suggestions to work by changing wrap to this:

WrappedT = typing.TypeVar('WrappedT')

def wrap(wrapped: WrappedT, *, previous=None) -> WrappedT:

Sadly, this means I have to from fluentpy.wrapper import wrap as _, since type checkers don't understand the concept of a callable module, at least not the way you've done it. On the upside, I get autocomplete suggestions!

Would it be possible to add this to the library, maybe a bit more extensive to be able to see the Wrapper methods like .each? Maybe there's even a way to get it to work with the direct module import?
