exxeleron / enterprise-components
Enterprise Components for kdb+
License: Apache License 2.0
In the replay process, data models for tables are loaded in the wrong order. Data models that are assigned to given process names should be loaded before the journal is replayed.
eodMng configured via the sync.cfg file fails during configuration loading:
ERROR 2015.01.13 10:39:12.505 eodMng- .cr.loadSyncCfg received (), failed with [type]
The format column in .hdb.status[] contains PARITIONED instead of PARTITIONED.
User request: global settings for loading common q libraries and/or qsd files to all processes or to a group of processes.
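A hypothetical sketch of what such a setting could look like in system.cfg (the libs key name and its placement at the top and group level are illustrative assumptions, not an existing feature):

```
# top level - libraries loaded by every process in the system (hypothetical key)
libs = commonUtils, commonSchemas

[group:core]
  # group level - additional libraries only for processes in this group
  libs = coreHelpers
```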
If there are groups in the config file that don't have users assigned, those are not included in the ufiles generation process. The same applies to users that are assigned to groups that don't exist.
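A minimal sketch (in Python, with hypothetical group and user mappings) of the cross-check that could surface both cases instead of silently dropping them:

```python
# Hypothetical data shapes for illustration only:
# group_users maps group name -> users assigned to it,
# user_groups maps user name -> groups the user is assigned to.
group_users = {"admins": ["uA"], "readers": []}
user_groups = {"uA": ["admins"], "uB": ["ghosts"]}

# Groups defined in the config but with no users assigned.
empty_groups = [g for g, users in group_users.items() if not users]

# Users assigned to groups that are not defined in the config.
unknown = {u: [g for g in gs if g not in group_users]
           for u, gs in user_groups.items()}
unknown = {u: gs for u, gs in unknown.items() if gs}

print(empty_groups)   # groups that would be silently skipped
print(unknown)        # users pointing at undefined groups
```

Reporting both lists as warnings during ufiles generation would make the misconfiguration visible.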
Hi!
First of all I want to say: great framework, exxeleron team! Thank you for open-sourcing this.
I'm a beginner in kdb+/q and would like to know: how can I define keys for a keyed table in dataflow.cfg? Or should I just apply xkey in every component that works with the table? Also, can I get rid of the time and sym column requirements for tables using the tickHF feed?
Tried to set up and run the tutorial on a Windows 10 x64 machine but got stuck when running the yak command.
D:\KDBEnterpriceComponent\DemoSystem>yak restart *
Traceback (most recent call last):
File "", line 6, in
File "main.py", line 128, in
File "main__yak.py", line 30, in
File "osutil/init.py", line 46, in
File "osutil/_win32.py", line 17, in
File "psutil/init.py", line 110, in
File "psutil/_pswindows.py", line 16, in
File "psutil/_psutil_windows.py", line 26, in
File "psutil/_psutil_windows.py", line 17, in _bbfreeze_import_dynamic_module
ImportError: DLL load failed: The operating system cannot run %1.
The error message seems to indicate that the psutil library failed to load. Any ideas?
Most of the functionality is portable. What is missing for a beta release is environment setup, Windows-specific commands in the tutorial, and perhaps a list of features that currently don't work on Windows (if any).
The hdb location (for generating the sysHdbSummary table) is extracted from the configuration field dataPath instead of the cfg.hdbPath field. In the default hdb configuration dataPath=cfg.hdbPath, however this is not always the case.
Table sysDiskUsage generated by the monitor component is missing the frequency field defined in the monitor.qsd file (analogous to the other monitor tables).
hk throws an error even if the parameters for the find command are given properly.
If a process was never started and is, for example, missing its log/ directory, hk should throw a meaningful warning.
At the end of the day each table is by default sorted by the sym column in order to apply the p# attribute. Sorting can be disabled using performSort=FALSE. This might be required for very large tables, as sorting is time- and resource-consuming. Note that with sorting disabled the p# attribute will not be applied in the destination hdb.
When .os.find does not find any files, it should return an empty list of symbols. On Linux, it generates a 'type signal instead.
Add support for subscription to tables that are published via the tickLF component. Subscribed tables will be stored in the global namespace. For tickLF subscription the field srcTickLF must be used in dataflow.cfg.
Usage example:
[table:universe]
[[stream.mrvs]]
srcTickLF = in.tickLF
Extend validation of the servers argument passed to .hnd.hopen[]. Currently only the data type of the servers argument is validated. The content of the servers argument should now also be validated, and unknown server names should be reported back to the user.
The initial log init.log of the component was deleted by the housekeeping script because the file's timestamp was never updated. A simple touch solves the problem, and it is called when the log is rotated. On systems that don't support touch, the link is removed and put back.
The sl component maintains links to the initial log and the current log when doing log rotation. On Windows this requires administrator privileges. Creating those links probably needs to be replaced by special names for the initial and current log.
.stream.pub throws a 'type error if an invalid handle is used for data publishing. The invalid handle is not removed from the .u.w dictionary if the subscribed process is terminated.
Under heavy load, when the last timer is removed, .tmr.p.recalcGcd is called with an empty .tmr.status, which causes an error with the message "attempt to calculate a gcd of empty list".
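A minimal sketch, in Python rather than q, of the guard that would avoid this: the recalculation returns a neutral value when the interval list is empty (recalc_gcd is a hypothetical analogue of .tmr.p.recalcGcd, not the actual implementation):

```python
from math import gcd
from functools import reduce

def recalc_gcd(intervals):
    """Recompute the common timer tick as the gcd of all active timer
    intervals. Guard: when the last timer has been removed the list is
    empty and gcd is undefined, so return 0 ("no tick") instead of failing."""
    if not intervals:
        return 0
    return reduce(gcd, intervals)

print(recalc_gcd([100, 250, 1000]))  # 50
print(recalc_gcd([]))                # 0
```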
The parameter serverSrc used for subscription is now deprecated. The new field subSrc should be used instead.
Currently os.q assigns Linux commands when run on macOS, which works fine except for the tar and find commands, which differ from their GNU equivalents.
Function .monitor.p.getRemoteFuncList throws a 'length signal if a process name configured in dataflow.cfg is not defined in system.cfg, e.g. with the following dataflow.cfg:
[sysTable:sysFuncSummary]
template = adminStatsTable
[[admin.monitor]]
execTime = 03:00:00
procList = access.ap
procNs = .demo
and the configuration for access.ap is missing in system.cfg.
In this case .monitor.p.getRemoteFuncList should throw a meaningful warning.
An operating mode where publishers and subscribers could interact with tables whose names dist isn't statically aware of at start-up. This behavior could be governed by a configuration flag.
The character "?" is currently not allowed in fields of type STRING in the configuration. There is no reason not to allow it; "?" is useful, e.g., in file patterns in the housekeeping functionality.
When starting replay.q the following error is observed:
console batch.admin_replay -a"-date 2014.12.17 -rdb ofp.rdb"
Error:
{[name]
if[10h=type name; name:`$name];
name:.sl.p.chopOff[name;".q"];
if[name in .sl.p.libs;.log.info[`sl](string name),".q library already loaded";:()];
.log.info[`sl] "Loading ",(string name),".q library...";
.log.p.offset+:2;
if[0<count p:.sl.p.lib[.sl.p.appendName[name;".q"] each .sl.p.slash .sl.libpath;(string name),".q";.log.error[`sl]];
.sl.p.libs,:name;
.log.p.offset-:2;
.log.info[`sl] p," loaded";
:();
];
.log.p.offset-:2;
:();
}
'.sl.p.libs
refreshUFiles throws a warning even if access.cfg is correctly filled in:
WARN 2016.06.14 08:14:09.024 ru - Some processes () provide additional users to a "shared" user files ().
Interface function .cr.getByProc throws a 'type error if a section [group] is defined in system.cfg but without any [[subsection]], e.g.:
[group:test]
# [[process.name]]
# command = "q test"
# type:q
# ...
Some parsec clones, like ex_parsec (Elixir) and Perl-Parser-Combinators, provide a very useful sequence parser combinator. This combinator creates a parser from a provided list of parsers by running them in sequence and returning the list of results if none of them fail. This combinator is currently missing from the parseq.q library.
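A minimal sketch in Python of the combinator's semantics (the char helper and the (result, rest)/None parser representation are illustrative, not parseq.q's actual types):

```python
# A parser here is a function: input string -> (result, rest) or None on failure.

def char(c):
    """Parser that matches a single literal character."""
    def parse(s):
        return (c, s[1:]) if s[:1] == c else None
    return parse

def sequence(parsers):
    """Run parsers one after another; succeed with the list of all
    results only if every parser in the list succeeds."""
    def parse(s):
        results = []
        for p in parsers:
            r = p(s)
            if r is None:          # any failure fails the whole sequence
                return None
            value, s = r
            results.append(value)
        return results, s
    return parse

ab = sequence([char("a"), char("b")])
print(ab("abc"))   # (['a', 'b'], 'c')
print(ab("axc"))   # None
```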
Function -11!(-2;logFile) returns, for a corrupted log file, two numbers (the number of valid chunks and the length of the replayable prefix). The function that replays data expects only the number of chunks, which is what is returned when the log file is not corrupted.
.os.cpdir in .hdbHk.plug.action[``compress] for copying files to compress is not used properly.
Table sysFuncSummary contains the number of user functions for a given process. Functions that don't exist on the server should not be reported as an error; such statistics should be inserted into sysFuncSummary with a count of 0.
Breaks Windows support. The process is terminated and yak err gives "C: ... Unknown host".
The current implementation of .monitor.p.backupEvent supports event backup only on Linux:
.monitor.p.backupEvent:{[eventDir;file;backupDir]
...
system "mv ", eventDir,string[file], " ",backupDir;
};
The os.q library should be used to fix this issue:
.monitor.p.backupEvent:{[eventDir;file;backupDir]
...
.os.move[eventDir,string[file]; backupDir];
};
Additionally, on Windows in event.q the path to the directory with events, EC_EVENT_PATH, should contain only "/":
.event.p.init:{[]
...
eventPath:ssr[getenv[`EC_EVENT_PATH];"\\";"/"];
};
In the current design, all .csv files uploaded with feedCsv have to contain all nulls separated by the corresponding separator defined in dataflow.cfg, i.e. a row with six null values (and semicolon as separator) has to be defined in the file as:
;;;;;
Files with empty lines (or lines with not enough separators) are treated as corrupted.
An additional variable specifying how to decode null values needs to be added to dataflow.qsd/dataflow.cfg so that empty lines (lines with no separator) can be read. If the variable is not defined, the current logic is preserved.
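A minimal Python sketch of the requested behaviour (read_rows and the decode_empty_line flag are hypothetical names, not the actual feedCsv API):

```python
def read_rows(text, sep=";", ncols=6, decode_empty_line=True):
    """Parse feedCsv-style rows. When the (assumed) decode_empty_line flag
    is set, an empty line is decoded as a row of all nulls (None) instead
    of causing the whole file to be rejected as corrupted."""
    rows = []
    for line in text.splitlines():
        if line == "":
            if decode_empty_line:
                rows.append([None] * ncols)   # empty line -> all-null row
                continue
            raise ValueError("corrupted file: empty line")
        fields = line.split(sep)
        if len(fields) != ncols:
            raise ValueError("corrupted file: wrong number of fields")
        rows.append([f if f else None for f in fields])
    return rows

# one data row, one empty line, one row of explicit nulls (";;;;;")
print(read_rows("a;b;c;d;e;f\n\n;;;;;"))
```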
The dist component signals a 'type error if no table was selected in dataflow.cfg for the given instance.
Add support for custom configuration fields which can be specified at the top level of the system.cfg file. Declarations of custom configuration fields should be specified in a custom qsd file. This custom qsd file can be loaded via the commonLibs entry in the system.cfg file.