ilayn / harold
An open-source systems and controls toolbox for Python3
License: MIT License
It's not blocking, but especially for static gains
bode_plot(State(np.eye(2)))
gives a RuntimeWarning: divide by zero encountered in log10 warning when the Bode magnitude part is calculated.
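The warning itself is NumPy's log10 being evaluated at an exact zero somewhere in the magnitude computation. A minimal sketch of one possible guard (an assumption, not harold's actual code), silencing the warning locally with np.errstate:

```python
import numpy as np

# Hypothetical magnitude data containing an exact zero, as can happen
# for static gains; plain log10 on this would emit the RuntimeWarning.
mag = np.array([1.0, 0.0, 2.0])
with np.errstate(divide="ignore"):
    mag_db = 20 * np.log10(mag)  # -inf where mag == 0, but no warning
print(mag_db)
```

The resulting -inf entries can then be clipped or masked before plotting.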
I ran into an issue while using another package that uses harold; I could solve it by changing the line below to D = np.empty((p, m), dtype=float).
Line 2883 in 90a785b
It basically implements the textbook definition. Implement the Newton iteration with line search for both care and dare (with proper function names if possible). Sylvester and Lyapunov solvers use the LAPACK versions, hence they are OK on paper.
Reminder: for control purposes, speed is irrelevant! So being implemented in Fortran doesn't mean much. The residuals and the conditioning of the variables decide whether certain synthesis algorithms work or not. This needs to be state-of-the-art without relying on anything off-the-shelf.
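For reference, the classical Newton scheme for care is Kleinman's iteration, where each Newton step solves one Lyapunov equation; a line search would additionally scale the update between iterates. A minimal sketch without the line search, leaning on SciPy's solvers (an illustration of the technique, not harold's implementation):

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Kleinman's Newton iteration for A'X + XA - X B R^{-1} B' X + Q = 0.
# The initial gain K must be stabilizing; A here is already stable,
# so K = 0 works. No line search is included in this sketch.
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))
for _ in range(50):
    Ak = A - B @ K
    # Lyapunov step: Ak' X + X Ak = -(Q + K' R K)
    X = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    K_new = np.linalg.solve(R, B.T @ X)  # next gain R^{-1} B' X
    if np.linalg.norm(K_new - K) < 1e-12:
        break
    K = K_new

X_ref = solve_continuous_are(A, B, Q, R)
print(np.allclose(X, X_ref))  # the iterate matches the direct solver
```

The dare case is analogous, with Stein equations taking the place of the Lyapunov equations.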
The x label 'Time' and y label 'Amplitude' are pushed outside the canvas when plotting with step_response_plot from IDLE, but they look fine in a Jupyter notebook.
A RecursionError is thrown if we try to multiply a state-space MIMO model (here meaning multi-input single-output or single-input multi-output) with a state-space/Transfer SISO model. This issue can be worked around by converting the MIMO model to a Transfer object before multiplying.
G1 = Transfer(
    [
        [[1], [1]]
    ],  # end of num
    [1, 2, 1]  # common den
)
G2 = transfer_to_state(G1)
G3 = Transfer([1], [1, 0])  # pure integrator
G4 = G1 * G3  # works
G5 = G2 * G3  # raises RecursionError
Just to put on the record that this is being fixed.
Hi! This is a simple little bug. If you call simulate_impulse_response with an argument for the time vector, the internal variable "ts" never gets created. Then, this line fails:
u[0] = 1./ts
For fun, here is a complete example that mimics my real problem:
import numpy as np
import matplotlib.pyplot as plt
import harold
dt = 1.0 / 50.0
# define 10 Hz filter with 0.7 damping:
w = 2 * np.pi * 10.0
num = w**2
den = [1.0, 2*0.7*w, w**2]
filt = harold.Transfer(num, den)
ss = harold.transfer_to_state(filt)
ssd = harold.discretize(ss, dt, 'foh')
t = np.arange(0., 1., dt)
y, t_ = harold.simulate_impulse_response(ssd, t)
# y, t_ = harold.simulate_impulse_response(ssd)
plt.plot(t_, y)
ERROR: Ignored the following versions that require a different python version: 1.0.2 Requires-Python >=3.8,<3.11
Please add support for Python 3.11
I am working on MIMO systems analysis. I tried to use the harold library to convert a transfer function matrix to a state-space realization and found that there are some problems in this routine.
I attached a Jupyter notebook file that reproduces my experiment. I compared the results from harold with python control (and also Matlab, which yields the same results as python control).
I didn't get the time to read into your code, but due to this issue the transmission zeros calculation for a MIMO transfer function matrix gets wrong values, as it seems to depend on the state-space realization.
There are too many cases where bokeh wouldn't work (terminals, IDEs and so on), but it is so shiny. I need to get on with it.
When a SISO Transfer() or State() is multiplied with a p x m matrix P, should it reject due to size mismatch, or elementwise multiply with each element of the matrix P, as if an overloaded Kronecker product kron(P, G)?
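For a static (gain-only) SISO system the proposed kron(P, G) semantics reduce to plain scalar broadcasting, which NumPy already illustrates; a tiny sketch (the matrix P here is made up):

```python
import numpy as np

# Scalar "system" g broadcast over every entry of a 2x3 matrix P,
# i.e. the kron(P, g) reading of the overload question above.
g = 2.0
P = np.array([[1.0, 0.0, -1.0],
              [0.5, 3.0, 2.0]])
print(np.kron(P, g))  # identical to P * g for a scalar g
```

For a dynamic G the analogous result would be a p x m system whose (i, j) entry is P[i, j] * G.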
There is no discretization test implemented for MIMO discretize stuff. Also rediscretize is still empty. Either remove or finish.
It is the most important entry point for possible tinkerers and contributors. Hence, the mostest highestest priority.
Has anyone ever needed elementwise multiplication of State representations?
In matlab, the .* operator is overloaded only for MIMO Transfer representations but not supported for MIMO State representations.
I've implemented this already anyway, but I'm not sure whether it would create more confusion between the * and @ operators.
Note: in Python and NumPy, * denotes elementwise multiplication (matlab .* behavior) and @ denotes matrix multiplication (matlab * behavior).
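The operator distinction in the note above, spelled out on plain NumPy arrays:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

print(A * B)  # elementwise (matlab .*): [[ 5. 12.] [21. 32.]]
print(A @ B)  # matrix product (matlab *): [[19. 22.] [43. 50.]]
```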
Though I strongly think the root locus belongs to the previous century, people keep pushing it to students. It is literally 5 lines of Jupyter notebook slider widget, but the control curriculum is still playing in the 80s mud. Hence there is no point in resisting, as everybody asks for it.
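The underlying computation is indeed tiny: the closed-loop poles of a unity-feedback loop with gain k are the roots of den(s) + k·num(s). A minimal sketch (the example plant is made up), to which a slider widget only needs to bind k:

```python
import numpy as np

# Root-locus points for L(s) = k / (s^2 + 3s + 2): roots of
# den(s) + k * num(s), swept over a few gains k.
num = np.array([1.0])
den = np.array([1.0, 3.0, 2.0])
pad = np.zeros(len(den) - len(num))  # align polynomial degrees
for k in (0.0, 1.0, 10.0):
    cl = den + k * np.concatenate([pad, num])
    print(k, np.roots(cl))  # k = 0 gives the open-loop poles -1, -2
```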
Currently, the State and Transfer models print out pre-designed strings. But different users have different needs: for example, academically oriented users prefer more about the stability properties (transmission zeros etc.), while application-oriented people are more interested in system properties (damping, bandwidth and the like).
There should be an entry point for sticking in a custom argument for repr.
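One possible shape for such an entry point, sketched with hypothetical names (Model and _repr_hook are not harold's API): a registrable callback that the default __repr__ defers to.

```python
# Sketch of a user-pluggable repr: if a formatter is registered, use
# it; otherwise fall back to the pre-designed string.
class Model:
    _repr_hook = None  # optional user-supplied formatter

    def __init__(self, poles):
        self.poles = poles

    def __repr__(self):
        if Model._repr_hook is not None:
            return Model._repr_hook(self)
        return f"Model with {len(self.poles)} poles"


# An application-oriented user swaps in their own view:
Model._repr_hook = lambda m: f"poles: {m.poles}"
print(Model([-1.0, -2.0]))  # prints "poles: [-1.0, -2.0]"
```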
If a transfer matrix involves scalars, e.g.,

        [     |   1   ]
    G = [  1  | ----- ]
        [     |  s+1  ]

then after the separation of the feed-through, the scalar is still treated as a polynomial entry and the code looks for least common denominators.
An extra check is necessary for empty dynamics in entries.
Moreover, tf(5), ss(5) still cause problems.
A proper gateway needs to be written to account for these.
Hi!
I'm getting an error trying to create a discrete MISO transfer function. It is easiest to show via code. This works:
import harold
print(harold.__version__)
print(harold.Transfer([[[1.0, 0.0]]], [[[1.0, 0.0]]], 0.02).polynomials)
Output:
1.0.2.dev0+517e57f
(array([[1., 0.]]), array([[1., 0.]]))
However, this does not work:
print(harold.Transfer([[[1.0], [1.0, 0.0]]], [[[1.0], [1.0, 0.0]]], 0.02).polynomials)
Output:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-4-e546dfb4f959> in <module>
----> 1 print(harold.Transfer([[[1.0], [1.0, 0.0]]], [[[1.0], [1.0, 0.0]]], 0.02).polynomials)
~/anaconda3/envs/py38/lib/python3.8/site-packages/harold-1.0.2.dev0+517e57f-py3.8.egg/harold/_classes.py in __init__(self, num, den, dt)
70 self._isdiscrete = False if dt is None else True
71
---> 72 self._recalc()
73
74 @property
~/anaconda3/envs/py38/lib/python3.8/site-packages/harold-1.0.2.dev0+517e57f-py3.8.egg/harold/_classes.py in _recalc(self)
301 else:
302 # Create a dummy statespace and check the zeros there
--> 303 zzz = transfer_to_state((self._num, self._den),
304 output='matrices')
305 self.zeros = transmission_zeros(*zzz)
~/anaconda3/envs/py38/lib/python3.8/site-packages/harold-1.0.2.dev0+517e57f-py3.8.egg/harold/_classes.py in transfer_to_state(G, output)
2974 A = haroldcompanion(den[0][0])
2975 B = np.zeros((A.shape[0], 1), dtype=float)
-> 2976 B[-1, 0] = 1.
2977 t1, t2 = A, B
2978
IndexError: index -1 is out of bounds for axis 0 with size 0
In matlab, for example, every now and then we use
F = tf(G);
[n, d] = tfdata(F);
Here, harold should have an option to select whether the individual system components or the system itself should be returned:
F = statetotransfer(G) # should give a genuine Transfer() object
n, d = statetotransfer(G, only_data=True) # should give the num and den entries instead
Hi Ilhan,
This issue does not pertain to any specific problem, but I could not find a better way to contact you. Perhaps your email address [email protected] (taken from one of your published IEEE papers) is more appropriate, but I'm not sure.
Anyway, onto my question. I read your comment here on the scicomp.stackexchange.com site, and I found it to be very helpful in my research (I'm currently a PhD student). I have a question that is somewhat related to this comment that I posted here, and I would appreciate any help that you can offer. If you do not have the time, I completely understand.
Kind regards,
Mahmoud Abdelkhalek
The new addition of static gain handling goes awry because it is not handled properly during the intake of the arithmetic operations. The tests are written; however, the implementation shows major rewrite issues.
Hi!
Adding (110 s) / (85 s^2 + 20 s + 1) and 0.25 gives an incorrect result, but it works fine if the first transfer function is normalized. Here is a little example; it shows that the numerator never gets divided by 85:
import numpy as np
import harold
num = np.array([110.0, 0.0])
den = np.array([85.0, 20.0, 1.0])
h_a = harold.Transfer(num, den)
h_b = harold.Transfer(num / 85.0, den / 85.0)
h2 = harold.Transfer(0.25, 1.0)
h_sum_a = h_a + h2
h_sum_b = h_b + h2
print(f"h_sum_a.num = {h_sum_a.num}")
print(f"h_sum_a.den = {h_sum_a.den}")
print()
print(f"h_sum_b.num = {h_sum_b.num}")
print(f"h_sum_b.den = {h_sum_b.den}")
The output is:
h_sum_a.num = [[2.50000000e-01 1.10058824e+02 2.94117647e-03]]
h_sum_a.den = [[1. 0.23529412 0.01176471]]
h_sum_b.num = [[0.25 1.35294118 0.00294118]]
h_sum_b.den = [[1. 0.23529412 0.01176471]]
Why this division by sampling time? It causes a bug for discrete systems with a sampling time other than 1. I am using the impulse response of the system for FFT convolution; this division causes a wrong DC gain. Currently I multiply the result of this function by the sample time to correct the DC gain. Is this division really required?
Line 252 in 2bfa00f
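Background on why the extra factor shows up as a wrong DC gain: for a discrete-time system the DC gain equals the plain sum of the impulse-response samples, with no sampling-time factor involved. A quick check with SciPy (an independent illustration, not harold's code):

```python
import numpy as np
from scipy.signal import dimpulse, dlti

dt = 0.1
# H(z) = 0.5 / (z - 0.5); its DC gain is H(1) = 0.5 / (1 - 0.5) = 1.0
sys = dlti([0.5], [1.0, -0.5], dt=dt)
t, (y,) = dimpulse(sys, n=200)
print(np.sum(y))  # ~1.0: the DC gain, independent of dt
```

An extra division by dt would scale this sum to 1/dt, which matches the report of having to multiply by the sample time to recover the correct gain.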
Hi again! :-)
I think the following code should work, but I'm getting an exception:
import harold
print(harold.__version__)
tf = (harold.Transfer([[[0.0], [1.0]]], [[[1.0], [1.0]]], 0.02)
+ harold.Transfer([[[1.0], [0.5]]], [[[1.0], [1.0]]], 0.02))
print(tf.polynomials)
Here is the output:
1.0.2.dev0+90a785b
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~/anaconda3/envs/py38/lib/python3.8/site-packages/harold-1.0.2.dev0+90a785b-py3.8.egg/harold/_classes.py in __add__(self, other)
404 try:
--> 405 return Transfer(self.to_array() + other.to_array(),
406 dt=self._dt)
~/anaconda3/envs/py38/lib/python3.8/site-packages/harold-1.0.2.dev0+90a785b-py3.8.egg/harold/_classes.py in __init__(self, num, den, dt)
64 (self._num, self._den,
---> 65 self._shape, self._isgain) = self.validate_arguments(num, den)
66 self._p, self._m = self._shape
~/anaconda3/envs/py38/lib/python3.8/site-packages/harold-1.0.2.dev0+90a785b-py3.8.egg/harold/_classes.py in validate_arguments(num, den, verbose)
1504 if returned_numden_list[0].size > returned_numden_list[1].size:
-> 1505 raise ValueError('Noncausal transfer functions are not '
1506 'allowed.')
ValueError: Noncausal transfer functions are not allowed.
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-1-fe5c19b20276> in <module>
1 import harold
2 print(harold.__version__)
----> 3 tf = (harold.Transfer([[[0.0], [1.0]]], [[[1.0], [1.0]]], 0.02)
4 + harold.Transfer([[[1.0], [0.5]]], [[[1.0], [1.0]]], 0.02))
5 print(tf.polynomials)
~/anaconda3/envs/py38/lib/python3.8/site-packages/harold-1.0.2.dev0+90a785b-py3.8.egg/harold/_classes.py in __add__(self, other)
406 dt=self._dt)
407 except ValueError:
--> 408 raise ValueError('Shapes are not compatible for '
409 'addition. Model shapes are {0} and'
410 ' {1}'.format(self._shape, other.shape))
ValueError: Shapes are not compatible for addition. Model shapes are (1, 2) and (1, 2)
Hi!
I'm getting an error trying to create a transfer function. Below is a little example that shows the error on my computer. I think that somehow a numeric leading "zero" shows up (it was -8.3948810935738183e-17 for me) and is not trimmed off by np.trim_zeros. That creates sizing trouble when assembling the state-space matrices.
import harold
# set up a 2-input, 1-output transfer function
# denominator is the same for both transfer functions:
den = [[[[84.64, 18.4, 1.0]], [[1.0, 7.2, 144.0]]]]
# - same as below except last 4 digits chopped off for each number
num = [
[
[[61.7973249220, 36.2498843026, 0.730119623369]],
[[0.037784067405, 0.997499379512, 21.76362282573]],
]
]
# this one works:
tf1 = harold.Transfer(num, den)
print(tf1)
# keep those last 4 digits and it breaks:
num = [
[
[[61.79732492202783, 36.24988430260625, 0.7301196233698941]],
[[0.0377840674057878, 0.9974993795127982, 21.763622825733773]],
]
]
tf2 = harold.Transfer(num, den)
print(tf2)
And here is the output:
Continuous-Time Transfer function
2 inputs and 1 output
Poles(real) Poles(imag) Zeros(real) Zeros(imag)
------------- ------------- ------------- -------------
-0.108696 1.16214e-07
-0.108696 -1.16214e-07
-3.6 11.4473
-3.6 -11.4473
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~/code/harold/bug.py in <module>
27 ]
28
---> 29 tf2 = harold.Transfer(num, den)
30 print(tf2)
~/code/harold/harold/_classes.py in __init__(self, num, den, dt)
70 self._isdiscrete = False if dt is None else True
71
---> 72 self._recalc()
73
74 @property
~/code/harold/harold/_classes.py in _recalc(self)
302 # Create a dummy statespace and check the zeros there
303 zzz = transfer_to_state((self._num, self._den),
--> 304 output='matrices')
305 self.zeros = transmission_zeros(*zzz)
306 self.poles = eigvals(zzz[0])
~/code/harold/harold/_classes.py in transfer_to_state(G, output)
3036
3037 for row in range(p):
-> 3038 C[row, k:k+num[row][col].size] = num[row][col][0, ::-1]
3039 k += coldegrees[col]
3040
ValueError: could not broadcast input array from shape (5) into shape (4)
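The trim_zeros behavior described in the report can be seen in isolation: NumPy only drops exact zeros, so a tiny numerical leading coefficient survives and inflates the apparent polynomial degree. A tolerance-aware trim (a sketch of one possible fix, not harold's code) avoids that:

```python
import numpy as np

# Exact-zero trimming leaves the ~1e-17 leading coefficient in place.
p = np.array([-8.394881093573818e-17, 1.0, 2.0])
print(np.trim_zeros(p, "f").size)  # 3 -- nothing was trimmed

# Tolerance-aware alternative: drop leading coefficients below tol.
tol = 1e-12
nz = np.nonzero(np.abs(p) > tol)[0]
trimmed = p[nz[0]:] if nz.size else p[:0]
print(trimmed.size)  # 2 -- the degree is now correct
```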
The package was never installable, though it passed all the virtual mumbo jumbo and test runs. Apparently Python packaging is not designed for usability.
Until I fix this, nothing is going anywhere.
A PR for the fix is already committed to SciPy: scipy/scipy#8178. In the meantime, modify the tests to avoid the QZ convergence failures.
Summing multiple higher-order transfer functions results in a 'ValueError: Noncausal transfer functions are not allowed.' I wrote a PolyFrac class, did the arithmetic there, then converted to a Transfer object, and it works fine!
The code below can be used to reproduce the issue.
import numpy as np
import harold
class PolyFrac:
    def __init__(self, num, den):
        self.num = num
        self.den = den
        self._num = np.polynomial.Polynomial(self.num[::-1])
        self._den = np.polynomial.Polynomial(self.den[::-1])

    def __add__(self, other):
        rnum = self._num * other._den + self._den * other._num
        rden = self._den * other._den
        return PolyFrac(rnum.coef[::-1], rden.coef[::-1])
num = [0.00048873990334,0.00008022868673,0.000004688964039,0.000000119457633,0.000000001314395,0.00000000000487 ]
den = [713971.3912200001,157343.82838884002,14104.582786048937,663.9560606101594,17.793303846830465,0.27893627449232,0.002591707015312,
0.00001456551703,0.000000046427704,0.000000000061138]
G1 = PolyFrac(num,den)
Gh1 = harold.Transfer(num, den)
num = [0.000248910547815,0.000040859701097,0.000002388044438,0.000000060838627,0.000000000669409,0.00000000000248]
den = [468539.19521279994,104086.4548127616,9436.270955826356,451.5149614339922,12.39566355720315,0.201316286426998,0.001963464678425,
0.000011684559727,0.000000040217318,0.000000000061138]
G2 = PolyFrac(num,den)
Gh2 = harold.Transfer(num, den)
num = [0.000589835421362,0.000096823936248,0.000005658873075,0.000000144167363,0.000000001586277,0.000000000005877]
den = [2416.14684,524.28995448,45.95699773796595,2.091367387046341,0.053136840811452,0.000764126679123,0.000006184705029,0.000000028361462,
0.000000000061138]
G3 = PolyFrac(num,den)
Gh3 = harold.Transfer(num, den)
G4 = G1+G2+G3
Gh4 = Gh1+Gh2+Gh3
G = harold.Transfer(G4.num, G4.den)
print(G.dcgain)
print(Gh4.dcgain)
For the transfer multiplications, tests are performed, and thanks to the lcm and gcd machinery the order does not grow as drastically as it does in matlab.
For State there is no minimality guarantee at the outset, hence a minimal-realization step is necessary.
Maybe an option to automate that via a keyword, as in G = F**H resulting in simplify=True? It needs a proper infix operator; ** here is just a placeholder.
As much as I like it, it is an unnecessary dependency that is used in very little of harold.
This is already dropped in accordance with Astrom, Murray for a very good reason. See for example python-control/python-control#46
A step further than this is not using decibels at all, since we don't have any matlab legacy. Plus, currently the plots are logarithmic but the tooltips are in absolute units; hence visually it has the log flavor, but the value can be read directly without using a strange interim unit. That also makes one less dropdown menu around the plots. But I am open to counter-arguments.
matlab's 4 levels of granularity are not optimal for freqresp computation. The scheme that harold implements is quite a bit better, but that is also not optimal: especially when two points of interest are close to each other, it doesn't handle the distinction well.
Currently it works by finding the region of interest and increasing the level of detail around the poles and zeros.
This is a pretty academic problem, and I'll see if I can convert it to a conference paper. Since we have bokeh-based interactive plots, the ~O(n^2+n) complexity is still too expensive. I need to find a way to reduce it further.
If someone has a good implementation with a proper analysis of the complexity, I'm all ears.
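A much-simplified version of the scheme described above (the function name and constants are made up, not harold's grid code): lay down a coarse log-spaced base grid, then cluster extra points in a narrow band around each pole/zero frequency.

```python
import numpy as np

def freq_grid(interesting_w, w_min=1e-2, w_max=1e3, base=50, local=20):
    """Coarse log grid plus refinement around each feature frequency."""
    w = np.logspace(np.log10(w_min), np.log10(w_max), base)
    for wi in interesting_w:
        # cluster extra points in a half-decade band around the feature
        w = np.concatenate([w, np.logspace(np.log10(wi) - 0.25,
                                           np.log10(wi) + 0.25, local)])
    return np.unique(w)

# Two hypothetical pole/zero frequencies at 1 and 10 rad/s
w = freq_grid([1.0, 10.0])
print(w.size)  # 50 base points plus up to 40 refinements, deduplicated
```

The close-features problem the note mentions shows up exactly here: when two entries of interesting_w fall inside the same band, their refinements overlap instead of resolving the two features separately.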
Currently, MIMO transfer num/den data is kept in lists of lists. This is good for quick access to entries with different lengths: for example, if one element is 1 and another element is s^3 + 5s^2 - 4s - 7, they can be kept as [[1], [1, 5, -4, -7]]. But for some operations, walking over lists of lists is too slow and inconvenient; for example, a zero-padded 3D numpy array is much easier to multiply with a scalar, etc.
A MIMO Transfer object can hold both the list of lists and the 3D array version. Hence, add NumPy arrays.
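The conversion in question is a straightforward zero-padding pass; a sketch (the right-aligned layout is an assumption about one reasonable choice):

```python
import numpy as np

# Right-align each entry's coefficients into a (p, m, n) array, where
# n is the longest entry; shorter polynomials get leading zeros.
num = [[[1.0], [1.0, 5.0, -4.0, -7.0]]]  # 1x2 MIMO numerator data
p, m = len(num), len(num[0])
n = max(len(entry) for row in num for entry in row)
arr = np.zeros((p, m, n))
for i, row in enumerate(num):
    for j, entry in enumerate(row):
        arr[i, j, n - len(entry):] = entry
print(arr[0, 0])  # [0. 0. 0. 1.]
print(arr[0, 1])  # [ 1.  5. -4. -7.]
```

With this layout, scalar operations become single vectorized expressions, e.g. 2 * arr.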
Current docstring contents are not sufficient. Every function, regardless of triviality, requires at least one usage example.
harold Import error when using scipy 1.8.0
Line 7 in 2bfa00f
Including harold package leads to the following error when running a script:
File "C:\Users\michele.franzan\AppData\Local\Programs\Python\Python310\lib\site-packages\harold\__init__.py", line 30, in <module>
from ._classes import *
File "C:\Users\michele.franzan\AppData\Local\Programs\Python\Python310\lib\site-packages\harold\_classes.py", line 6, in <module>
from scipy.linalg.decomp import _asarray_validated
ImportError: cannot import name '_asarray_validated' from 'scipy.linalg.decomp' (C:\Users\michele.franzan\AppData\Local\Programs\Python\Python310\lib\site-packages\scipy\linalg\decomp.py)
I've found out that the _asarray_validated() function now lives in scipy._lib._util instead of scipy.linalg.decomp.