css-electronics / api-examples
CANedge API Examples for MDF4/S3
License: MIT License
When some files are in the conversion process, I get the following error. I have read the library's documentation, but I can't find the cause of the error:
Device: 4F0BBBD2 | Log file: /LOG/958D2219/00002501/00000001-63D2969F.MF4 [Extracted 127276 decoded frames]
Period: 2023-01-26 14:57:17.962700+00:00 - 2023-01-26 15:04:02.828300+00:00
Traceback (most recent call last):
File "C:\Users\hugoa\OneDrive\Escritorio\Telemetria\Decoder MF4\Decoder-MF4\data-processing\process_data.py", line 52, in <module>
df_phys_join = restructure_data(df_phys=df_phys_all, res="1S", full_col_names=True)
File "C:\Users\hugoa\OneDrive\Escritorio\Telemetria\Decoder MF4\Decoder-MF4\data-processing\utils.py", line 100, in restructure_data
df_phys_join = pd.merge_ordered(
File "C:\Users\hugoa\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\reshape\merge.py", line 321, in merge_ordered
result = _merger(left, right)
File "C:\Users\hugoa\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\reshape\merge.py", line 290, in _merger
op = _OrderedMerge(
File "C:\Users\hugoa\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\reshape\merge.py", line 1623, in __init__
_MergeOperation.__init__(
File "C:\Users\hugoa\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\reshape\merge.py", line 703, in __init__
self._maybe_coerce_merge_keys()
File "C:\Users\hugoa\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\reshape\merge.py", line 1262, in _maybe_coerce_merge_keys
raise ValueError(msg)
ValueError: You are trying to merge on float64 and datetime64[ns, UTC] columns. If you wish to proceed you should use pd.concat
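A likely cause, judging from the message, is that one of the joined DataFrames carries raw float timestamps while the other has a timezone-aware datetime key. A minimal sketch of the fix (the column names are hypothetical, not the repo's actual ones): coerce both merge keys to the same tz-aware datetime dtype before calling pd.merge_ordered.

```python
import pandas as pd

# Hypothetical reproduction + fix: pd.merge_ordered raises the ValueError
# above when the "on" columns have mismatched dtypes, e.g. float64 epoch
# seconds vs. timezone-aware datetime64.
left = pd.DataFrame({"TimeStamp": [0.0, 1.0], "SpeedKph": [50.0, 52.0]})
right = pd.DataFrame(
    {"TimeStamp": pd.to_datetime([0, 1], unit="s", utc=True), "RPM": [900, 950]}
)

# Fix: coerce the float seconds to the same tz-aware datetime dtype first.
left["TimeStamp"] = pd.to_datetime(left["TimeStamp"], unit="s", utc=True)

merged = pd.merge_ordered(left, right, on="TimeStamp")
print(merged.dtypes["TimeStamp"])  # datetime64[ns, UTC]
```

If the timestamps really are incomparable (e.g. relative seconds vs. absolute times), pd.concat is the fallback the error message suggests, but aligning the dtypes keeps the ordered merge.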
Currently DBC files are applied across the entire log file. An improvement could be to enable parsing a DBC object ala below:
dbc_paths = {"CAN": [("dbc_files/CSS-Electronics-SAE-J1939-DEMO.dbc", 0)]}
Here, 0 would refer to the DBC being applied across all channels, 1 would mean only for Bus Channel 1, and 2 would mean only for Bus Channel 2. This requires a modification to the logic in load_dbc_files and extract_phys.
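A possible shape for that selection logic, as a hedged sketch (the helper name dbcs_for_channel is illustrative and not part of the actual load_dbc_files/extract_phys API):

```python
# Hypothetical channel-aware DBC mapping: 0 = all channels, 1/2 = that
# bus channel only.
dbc_paths = {"CAN": [("dbc_files/CSS-Electronics-SAE-J1939-DEMO.dbc", 0)]}

def dbcs_for_channel(dbc_paths, channel):
    """Return the DBC paths that should be applied to a given bus channel."""
    return [
        path
        for path, chn in dbc_paths.get("CAN", [])
        if chn == 0 or chn == channel  # 0 means "all channels"
    ]

print(dbcs_for_channel(dbc_paths, 2))  # the demo DBC applies, since chn == 0
```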
It seems the resampling argument is hardcoded to "1S" instead of using res?
The problem is also present in the "dashboard-writer" repo's "utils.py".
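A sketch of the suggested fix, under the assumption that restructure_data pivots and resamples the decoded data (the body below is illustrative, not the repo's actual implementation): forward the res argument to resample() instead of the literal.

```python
import pandas as pd

# Illustrative fix: forward the res argument to resample() rather than a
# hardcoded literal. (Newer pandas prefers lowercase "1s" over "1S".)
def restructure_data(df_phys, res="1s"):
    df = df_phys.pivot_table(
        values="Physical Value", index=df_phys.index, columns="Signal"
    )
    return df.resample(res).mean()  # use res, not a literal frequency

# Tiny check: 4 samples over 4 seconds, resampled into 2-second bins
idx = pd.to_datetime([0, 1, 2, 3], unit="s", utc=True)
df_phys = pd.DataFrame(
    {"Signal": ["Speed"] * 4, "Physical Value": [10.0, 20.0, 30.0, 40.0]},
    index=idx,
)
print(restructure_data(df_phys, res="2s"))  # two rows: 15.0 and 35.0
```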
How can I extract the turn signal from the CAN bus with your code? Thank you in advance!
Hi,
I am trying to load and decode an MF4 file using the python API. In the MATLAB API, the index of the CAN channel is passed to the read
method:
can_idx = 8;
rawTimeTable = read(m,can_idx,m.ChannelNames{can_idx});
I could not find out how to do the same thing when using the python API. Currently my code looks like this:
db = can_decoder.load_dbc(pathToDBC)
df_decoder = can_decoder.DataFrameDecoder(db)
with open(pathToMF4, "rb") as handle:
    mdf_file = mdf_iter.MdfFile(handle)
    df_raw = mdf_file.get_data_frame()
df = df_decoder.decode_frame(df_raw)
This does decode the data, but the dataframe is not ordered in groups like when using MATLAB. Is it possible to do this with the python API?
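One pandas-only option, assuming the decoded frame follows can_decoder's long format with "Signal" and "Physical Value" columns (verify the exact column names against your own output), is to split it into per-signal groups after decoding:

```python
import pandas as pd

# Stand-in for the decoded output; the DataFrameDecoder returns a
# long-format frame with (among others) "Signal" and "Physical Value"
# columns -- check this against your actual decoded df.
df = pd.DataFrame(
    {
        "Signal": ["EngineSpeed", "WheelSpeed", "EngineSpeed"],
        "Physical Value": [900.0, 45.0, 950.0],
    }
)

# Split the long-format frame into one table per signal:
groups = {name: g for name, g in df.groupby("Signal")}
print(sorted(groups))  # ['EngineSpeed', 'WheelSpeed']
```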
Hello CSS team,
First of all, let me state that I am an amateur at Python and coding in general.
But recently I have tried to set up a server and dashboard on a local network (locally hosted) on our Raspberry Pi 4B at our workplace.
I have managed to set up a local MinIO server (the CANedge2 successfully sent data to it), the InfluxDB writer and the Grafana dashboard, but recently hit an obstacle when I tried to install mdf-iter 2.0.5.
From what I can see in the download files, the package is compiled only for the x86 architecture.
Is there any way to install this package on an ARM architecture, or to compile it for one?
Thank you in advance
best regards
Tomas
It seems somewhat out of place that the method restructure_data() writes a .csv file.
In the example process_data.py, there is a call to .to_csv() right after restructure_data(), resulting in a write and then an overwrite of the .csv file:
"process_data.py":
df_phys_join = restructure_data(df_phys=df_phys_all, res="1S") ### Writes .csv file
df_phys_join.to_csv("output_joined.csv") ### Overwrites the above csv file
The same issue exists in the "dashboard-writer" repo.
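A sketch of the suggested separation of concerns (illustrative, not the repo's actual code): restructure_data() only returns the joined DataFrame, and the caller performs the single CSV write.

```python
import io
import pandas as pd

# Illustrative separation: restructure_data() has no file side effects;
# writing is left entirely to the caller.
def restructure_data(df_phys, res="1s"):
    df = df_phys.pivot_table(
        values="Physical Value", index=df_phys.index, columns="Signal"
    )
    return df.resample(res).mean()  # no .to_csv() in here

idx = pd.to_datetime([0, 1], unit="s", utc=True)
df_phys = pd.DataFrame(
    {"Signal": ["Speed", "Speed"], "Physical Value": [1.0, 2.0]}, index=idx
)

buf = io.StringIO()  # stands in for "output_joined.csv"
restructure_data(df_phys).to_csv(buf)  # the single, caller-controlled write
print(buf.getvalue())
```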
MF4 recorded with firmware 1.7.1:
(env) user@vm$ python process_tp_data.py
Traceback (most recent call last):
File "/home/user/Downloads/api-examples-1.0.9/examples/data-processing/process_tp_data.py", line 38, in <module>
process_tp_example(devices, dbc_paths, "uds")
File "/home/user/Downloads/api-examples-1.0.9/examples/data-processing/process_tp_data.py", line 17, in process_tp_example
df_raw, device_id = proc.get_raw_data(log_file)
File "/home/user/Downloads/api-examples-1.0.9/examples/data-processing/utils.py", line 230, in get_raw_data
df_raw = mdf_file.get_data_frame()
RuntimeError: An unexpected error occurred while obtaining a CAN iterator: Not finalized?
MF4 recorded with firmware 1.6.1:
(env) user@vm$ python process_tp_data.py
Finished saving CSV output for devices: ['//LOG/2F6913DB']
(env) user@vm$
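The "Not finalized?" error typically means the MF4 was not closed properly and must be finalized (e.g. via the MDF4 converters) before mdf_iter can read it. A defensive sketch that skips such files instead of aborting the whole batch (get_raw_data_safe and fake_loader are hypothetical names, not the repo's API):

```python
# Defensive sketch: skip unfinalized log files instead of aborting the
# whole batch when one file raises the "Not finalized?" RuntimeError.
def get_raw_data_safe(load_fn, log_file):
    """Call load_fn(log_file); return None if the file is not finalized."""
    try:
        return load_fn(log_file)
    except RuntimeError as exc:
        if "finalized" in str(exc).lower():
            print(f"Skipping unfinalized file: {log_file}")
            return None
        raise  # unrelated errors still propagate

# Stand-in loader reproducing the observed error message:
def fake_loader(path):
    raise RuntimeError(
        "An unexpected error occurred while obtaining a CAN iterator: Not finalized?"
    )

result = get_raw_data_safe(fake_loader, "00000001-63D2969F.MF4")
print(result)  # None
```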
Outline the feature request
Support native handling of compressed/encrypted log files from the CANedge within the canedge_browser module and the mdf_iter module.
What is the use case?
This would enable native support for all types of CANedge log files, removing the need for using the MDF4 converters as part of Python automation scripts.
Please comment if you'd like to see this feature added as well, or if you have any thoughts on it.
Hello, CSS Team
I've noticed that in the MultiFrame Decoder class, the 8-bit source address is hardcoded to 254 (0xFE).
See line 493 in utils.py:
This can cause issues, as the address of the original sender of the TP broadcast is lost, which can make use with more than one source difficult.
Is there any way you could change it so the Source Address gets carried over from the BAM message with PGN EC00 to the final "joint" message?
Best regards,
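For background on the request above: in J1939 the source address is the low byte of the 29-bit CAN ID, so it can be recovered directly from the TP.CM (BAM) frame. A small sketch of the standard J1939 bit layout (not the repo's code):

```python
# Standard J1939 29-bit ID layout: priority | ext. data page | data page |
# PDU format | PDU specific | source address (low byte).
def j1939_source_address(can_id):
    return can_id & 0xFF  # low 8 bits = source address

def j1939_pgn(can_id):
    pgn = (can_id >> 8) & 0x3FFFF  # 18-bit PGN field
    if (pgn & 0xFF00) < 0xF000:    # PDU1: PS byte is a destination address
        pgn &= 0x3FF00             # clear it to get the canonical PGN
    return pgn

# Example: TP.CM (BAM) broadcast from source 0x27 carries PGN 0xEC00
can_id = 0x18ECFF27
print(hex(j1939_source_address(can_id)))  # 0x27
print(hex(j1939_pgn(can_id)))             # 0xec00
```

Carrying this extracted source address from the BAM frame over to the reassembled message would preserve the original sender.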
Is it possible to load *.ASC files or are only *.MF4 files supported?
Hi Martin,
I followed the instructions you mentioned last time, but now I get the error below and I don't understand why it happens. I'm using the downloaded project with the sample files:
Found a total of 2 log files
Traceback (most recent call last):
File "C:\Users\hugoa\OneDrive\Escritorio\api-examples-1.1.0\examples\data-processing\process_data.py", line 31, in <module>
df_raw, device_id = proc.get_raw_data(log_file, passwords=pw)
File "C:\Users\hugoa\OneDrive\Escritorio\api-examples-1.1.0\examples\data-processing\utils.py", line 221, in get_raw_data
device_id = self.get_device_id(mdf_file)
File "C:\Users\hugoa\OneDrive\Escritorio\api-examples-1.1.0\examples\data-processing\utils.py", line 234, in get_device_id
return mdf_file.get_metadata()["HDComment.Device Information.serial number"]["value_raw"]
KeyError: 'HDComment.Device Information.serial number'
Process finished with exit code 1
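A debugging sketch for this KeyError (the helper name and the metadata dict below are illustrative): dump the available metadata keys and fall back gracefully when the serial-number entry is missing, e.g. because the MF4 was not produced by a CANedge or is still encrypted.

```python
# Hypothetical helper around the get_metadata() dict of dotted keys seen
# in the traceback: when the expected key is absent, list what IS there
# instead of crashing with a KeyError.
def get_device_id_safe(metadata):
    key = "HDComment.Device Information.serial number"
    entry = metadata.get(key)
    if entry is None:
        print("Available metadata keys:", sorted(metadata))
        return None
    return entry["value_raw"]

# Example with a metadata dict that lacks the expected key:
meta = {"HDComment.File Information.comment": {"value_raw": "demo"}}
print(get_device_id_safe(meta))  # None, after listing the keys present
```

Inspecting the printed keys should reveal whether the file simply uses a different metadata layout.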
The tutorial for UDS uses the example of retrieving the VIN: https://www.csselectronics.com/pages/uds-protocol-tutorial-unified-diagnostic-services
but there is no associated DBC file or example for retrieving it. Since the VIN is a more universal use case compared to electric vehicles, it would be nice if a DBC file and an associated example were provided for it: https://github.com/CSS-Electronics/api-examples/blob/master/examples/data-processing/process_tp_data.py#L33
Hey there,
I'm trying to install the CAN bus Python API on my Raspberry Pi. The problem is that every time I try to install the packages from the requirements.txt list, I get an error with the mdf-iter package. It says:
ERROR: Could not find a version that satisfies the requirement mdf_iter>=2.0.10
ERROR: No matching distribution found for mdf_iter>=2.0.10
I also tried installing mdf-iter separately, as well as older versions, but it doesn't help. I would be very happy if anyone can help.
Thank you in advance !
With kind regards,
Faraz
I tried to install the requirements with "pip install -r requirements.txt". First I had a problem with the numpy version, so I changed it to the latest version. Then I had a problem with wheels: "ERROR: Could not build wheels for multidict, pandas, yarl, which is required to install pyproject.toml-based projects". As suggested on the internet, I upgraded pip.
Here is the error log file:
errors.txt
How can I fix these errors?