

pycparser v2.22



Introduction

What is pycparser?

pycparser is a parser for the C language, written in pure Python. It is a module designed to be easily integrated into applications that need to parse C source code.

What is it good for?

Anything that needs C code to be parsed. The following are some uses for pycparser, taken from real user reports:

  • C code obfuscator
  • Front-end for various specialized C compilers
  • Static code checker
  • Automatic unit-test discovery
  • Adding specialized extensions to the C language

One of the most popular uses of pycparser is in the cffi library, which uses it to parse the declarations of C functions and types in order to auto-generate FFIs.

pycparser is unique in the sense that it's written in pure Python - a very high level language that's easy to experiment with and tweak. To people familiar with Lex and Yacc, pycparser's code will be simple to understand. It also has no external dependencies (except for a Python interpreter), making it very simple to install and deploy.

Which version of C does pycparser support?

pycparser aims to support the full C99 language (according to the standard ISO/IEC 9899). Some features from C11 are also supported, and patches to support more are welcome.

pycparser supports very few GCC extensions, but it's fairly easy to set things up so that it parses code with a lot of GCC-isms successfully. See the FAQ for more details.

What grammar does pycparser follow?

pycparser very closely follows the C grammar provided in Annex A of the C99 standard (ISO/IEC 9899).

How is pycparser licensed?

BSD license.

Contact details

For reporting problems with pycparser or submitting feature requests, please open an issue, or submit a pull request.

Installing

Prerequisites

  • pycparser was tested with Python 3.8+ on Linux, macOS and Windows.
  • pycparser has no external dependencies. The only non-stdlib library it uses is PLY, which is bundled in pycparser/ply. The current PLY version is 3.10, retrieved from http://www.dabeaz.com/ply/

Note that pycparser (and PLY) uses docstrings for grammar specifications. Python installations that strip docstrings (such as when running with python -OO) will fail to instantiate and use pycparser. You can try to work around this problem by making sure the PLY parsing tables are pre-generated in normal mode; this isn't an officially supported/tested mode of operation, though.

Installation process

The recommended way to install pycparser is with pip:

> pip install pycparser

Using

Interaction with the C preprocessor

In order to be compilable, C code must be preprocessed by the C preprocessor (cpp). cpp handles preprocessing directives like #include and #define, removes comments, and performs other minor tasks that prepare the C code for compilation.

For all but the most trivial snippets of C code, pycparser, like a C compiler, must receive preprocessed C code in order to function correctly. If you import the top-level parse_file function from the pycparser package, it will interact with cpp for you, as long as it's in your PATH or you provide a path to it.

Note also that you can use gcc -E or clang -E instead of cpp. See the using_gcc_E_libc.py example for more details. Windows users can download and install a binary build of Clang for Windows from this website.
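As a minimal sketch (the file contents and function name are illustrative): for code that is already preprocessed, parse_file can be used with use_cpp=False; flipping it to True makes parse_file run the preprocessor first, configurable via the cpp_path and cpp_args parameters.

```python
import os
import tempfile
from pycparser import parse_file

# A tiny, already-preprocessed C file (no #include directives).
src = "int add(int x, int y) { return x + y; }\n"
with tempfile.NamedTemporaryFile('w', suffix='.c', delete=False) as f:
    f.write(src)
    path = f.name

# With use_cpp=True (and cpp or 'gcc -E' available via cpp_path /
# cpp_args), parse_file would preprocess first; this snippet needs
# no preprocessing, so we skip that step.
ast = parse_file(path, use_cpp=False)
ast.show()
os.remove(path)
```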

What about the standard C library headers?

C code almost always #includes various header files from the standard C library, like stdio.h. While (with some effort) pycparser can be made to parse the standard headers from any C compiler, it's much simpler to use the provided "fake" standard includes for C11 in utils/fake_libc_include. These are standard C header files that contain only the bare necessities to allow valid parsing of the files that use them. As a bonus, since they're minimal, it can significantly improve the performance of parsing large C files.

The key point to understand here is that pycparser doesn't really care about the semantics of types. It only needs to know whether some token encountered in the source is a previously defined type. This is essential in order to be able to parse C correctly.
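A short sketch of this behavior (mytype is an arbitrary illustrative name): the same token sequence parses or fails depending only on whether a typedef was seen earlier.

```python
from pycparser import c_parser
from pycparser.plyparser import ParseError

# 'mytype *x;' is a declaration only if 'mytype' is a known type
# name; pycparser tracks typedefs for exactly this reason.
ok = c_parser.CParser().parse("typedef int mytype;\nmytype *x;")
print(type(ok).__name__)  # FileAST

# Without the typedef, the same tokens cannot be parsed.
failed = False
try:
    c_parser.CParser().parse("mytype *x;")
except ParseError as err:
    failed = True
    print("ParseError:", err)
```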

See this blog post for more details.

Note that the fake headers are not included in the pip package nor installed via setup.py (#224).

Basic usage

Take a look at the examples directory of the distribution for a few examples of using pycparser. These should be enough to get you started. Please note that most realistic C code samples would require running the C preprocessor before passing the code to pycparser; see the previous sections for more details.
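For a taste, here is a minimal round-trip (the function name is illustrative): parse a preprocessed snippet into an AST, print the tree, and regenerate C code with c_generator.

```python
from pycparser import c_parser, c_generator

src = "int square(int num) { return num * num; }"

# Parse (already-preprocessed) C source into an AST...
ast = c_parser.CParser().parse(src, filename='<stdin>')
ast.show()

# ...and emit C code back from the AST.
print(c_generator.CGenerator().visit(ast))
```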

Advanced usage

The public interface of pycparser is well documented with comments in pycparser/c_parser.py. For a detailed overview of the various AST nodes created by the parser, see pycparser/_c_ast.cfg.

There's also a FAQ available here. In any case, you can always drop me an email for help.

Modifying

There are a few points to keep in mind when modifying pycparser:

  • The code for pycparser's AST nodes is automatically generated from a configuration file - _c_ast.cfg, by _ast_gen.py. If you modify the AST configuration, make sure to re-generate the code. This can be done by running the _build_tables.py script from the pycparser directory.
  • Make sure you understand the optimized mode of pycparser - for that you must read the docstring in the constructor of the CParser class. For development you should create the parser without optimizations, so that it will regenerate the Yacc and Lex tables when you change the grammar.
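The development setup described above can be sketched as follows (keyword arguments as documented in the CParser constructor docstring):

```python
from pycparser import c_parser

# For grammar development: build the parser without the
# pre-generated optimized tables, so that PLY rebuilds the Lex
# and Yacc tables when the grammar changes (slower startup).
parser = c_parser.CParser(lex_optimize=False, yacc_optimize=False)
ast = parser.parse("int x;")
ast.show()
```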

Package contents

Once you unzip the pycparser package, you'll see the following files and directories:

README.rst:

This README file.

LICENSE:

The pycparser license

setup.py:

Installation script

examples/:

A directory with some examples of using pycparser

pycparser/:

The pycparser module source code.

tests/:

Unit tests.

utils/fake_libc_include:

Minimal standard C library include files that should allow parsing any C code. Note that these headers now include C11 code, so they may not work when the preprocessor is configured to an earlier C standard (like -std=c99).

utils/internal/:

Internal utilities for my own use. You probably don't need them.

Contributors

Some people have contributed to pycparser by opening issues on bugs they've found and/or submitting patches. The list of contributors is in the CONTRIBUTORS file in the source distribution. After pycparser moved to Github I stopped updating this list because Github does a much better job at tracking contributions.


pycparser's Issues

Walk through AST

Maybe I need to take a closer look at your project, but I didn't find how to walk through the generated AST.
I mean, is there any way to do something like this:

ast = parser.parse(text, filename='<none>')
for i in run_through(ast):
    if i.name == "If":
        print("Match!")
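The usual way to do this is to subclass c_ast.NodeVisitor, which dispatches to visit_<NodeName> methods; a minimal sketch:

```python
from pycparser import c_parser, c_ast

class IfFinder(c_ast.NodeVisitor):
    # visit_If is called for every If node encountered in the tree.
    def visit_If(self, node):
        print("Match!")
        self.generic_visit(node)  # keep walking nested statements

src = "void f(int a) { if (a) { if (a > 1) a = 0; } }"
ast = c_parser.CParser().parse(src, filename='<none>')
IfFinder().visit(ast)  # prints "Match!" twice
```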

Feature: built-in ply based preprocessor?

A friend happened to come upon this code in the ply distribution which implements an ANSI-C compatible preprocessor.

Would that be a suitable replacement for using cpp.exe? Pycparser could support it transparently.

Test failure with clang 5.1 on OS X 10.9

I encountered a very strange error when running the test suite on OS X with clang.

Generating LALR tables
WARNING: 1 shift/reduce conflict
........................In file included from tests/c_files/memmgr.c:8:
tests/c_files/memmgr.h:37:7: warning: missing terminating ' character
      [-Winvalid-pp-token]
// you'll probably want to keep those undefined, because
      ^
tests/c_files/memmgr.h:96:8: warning: extra tokens at end of #endif directive
      [-Wextra-tokens]
#endif // MEMMGR_H
       ^
       //
tests/c_files/memmgr.c:97:56: warning: missing terminating ' character
      [-Winvalid-pp-token]
    // that if nbytes is a multiple of nquantas, we don't allocate too much
                                                       ^
tests/c_files/memmgr.c:119:28: warning: missing terminating ' character
      [-Winvalid-pp-token]
                // its prev's next to its next
                           ^
4 warnings generated.
E.....................................................
======================================================================
ERROR: test_with_cpp (test_general.TestParsing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/daniel/Downloads/pycparser-release_v2.10/tests/test_general.py", line 34, in test_with_cpp
    cpp_args='-I%s' % c_files_path)
  File "./pycparser/__init__.py", line 93, in parse_file
    return parser.parse(text, filename)
  File "./pycparser/c_parser.py", line 138, in parse
    debug=debuglevel)
  File "./pycparser/ply/yacc.py", line 265, in parse
    return self.parseopt_notrack(input,lexer,debug,tracking,tokenfunc)
  File "./pycparser/ply/yacc.py", line 1047, in parseopt_notrack
    tok = self.errorfunc(errtoken)
  File "./pycparser/c_parser.py", line 1613, in p_error
    column=self.clex.find_tok_column(p)))
  File "./pycparser/plyparser.py", line 54, in _parse_error
    raise ParseError("%s: %s" % (coord, msg))
ParseError: tests/c_files/memmgr.c:1:1: before: /

----------------------------------------------------------------------
Ran 78 tests in 5.642s

FAILED (errors=1)

It took a while to figure out, but the problem seems to be that cpp, which calls clang, leaves all // comments in the output, though it does remove /* */ comments. Even weirder, calling clang -E directly does the expected thing and removes all comments! I'd say this is a bug in clang, but in any case pycparser malfunctions with clang. If I change preprocess_file() to always call clang -E instead of cpp the tests all pass. It also works with cc -E and gcc -E, which are symlinks to clang.

I'm not sure what the best solution is but thought you should be aware of it.

Node for writing out a macro

Hi,

I am writing a program with this library that converts inline functions within a file to equivalent macros. For example,

inline void fun(int y)
{
  x = y;
}

to

#define fun(y) \
do { \
  x = y; \
} while (0)

I have done most of the work, including macro hygiene (renaming identifiers so they don't conflict), but I've run into a problem writing out the generated macro, which is represented as a simple string.

My idea is to first search for all the inline functions within the preprocessed file and rewrite them to macros at the AST level, which means popping out the old AST node that represents the function and inserting a node that will write out the macro in its location.

I know parsing a macro is really difficult, and that's the reason you don't support it, but what about writing one out? I think an AST node that holds a string and simply writes it out when visited by the generator would suffice.

I searched for such a node class but couldn't find one. Does it exist? If not, may I implement it, or do you dislike this idea? The node would only be located directly under FileAST, so it won't confuse the syntax.

CGenerator.visit_ExprList curly braces instead of parentheses for sub-expression-lists

In c_generator.py, CGenerator.visit_ExprList prints sub-expression-lists enclosed in curly-braces instead of round parentheses. The C spec doesn't allow this afaik and neither does the grammar defined in _c_ast.cfg.

Example:

from pycparser import c_ast
from pycparser import c_generator
gen = c_generator.CGenerator()
e1 = c_ast.Assignment('=', c_ast.ID('a'), c_ast.ID('b'))
e2 = c_ast.Assignment('=', c_ast.ID('b'), c_ast.ID('c'))
e3 = c_ast.Assignment('=', c_ast.ID('c'), c_ast.ID('a'))
el1 = c_ast.ExprList([e2, e3])
el = c_ast.ExprList([e1, el1])
gen.visit(el)
'a = b, {b = c, c = a}'

Should be:

'a = b, (b = c, c = a)' # <-- '(' and ')' instead of '{' and '}'

This causes such expressions generated by CGenerator to no longer be parsed by CParser (as mentioned the _c_ast.cfg file doesn't consider compound statements as expressions).

Workaround for users (and potential fix):

class FixedCGenerator(c_generator.CGenerator):

    def __init__(self):
        super(FixedCGenerator, self).__init__()

    def visit_ExprList(self, n):
        visited_subexprs = []
        for expr in n.exprs:
            if isinstance(expr, c_ast.ExprList):
                visited_subexprs.append('(' + self.visit(expr) + ')')
            else:
                visited_subexprs.append(self.visit(expr))
        return ', '.join(visited_subexprs)

// wayrick

P.S. I really like this module (much easier to use than clang :) )

parsing "const char* const*" generates "const char ** const"

Heyho,

I use pycparser to automatically implement a given interface with stub functions. One function takes a const char* const* argument, and after parsing this declaration, appending the body and generating it with defaults, I get a const char ** const argument in my implementation. This leads to a "warning: assignment from incompatible pointer type" from gcc.

I have two cases leading to this warning:
parsed -> generated
const char* const* arg -> const char ** const arg
struct bla* const* arg -> struct bla ** const arg

--schachmat

p_unified_wstring_literal crashes with builtin function does not have __getitem__

Hi Eli,
I love this library, thanks so much. I ran into an issue when trying to parse some code though. In c_parser.py, p_unified_wstring_literal has this line:
p[1].value = p[1].value.rstrip[:-1] + p[2][1:]

which crashed for me because the builtin function rstrip cannot be sliced. Is it possible you meant this:
p[1].value = p[1].value.rstrip()[:-1] + p[2][1:]

Or even perhaps p[2][2:]? I believe a test case for reproducing this would be something like the following:
L"hi" L"there";

Thanks again.

Add sys/socket.h support to fakelib

As the title says, it'd be very useful to get sys/socket.h support in fakelib.

Is there any way I can overcome this issue in the meantime?

Introduce macro-of-inline as an application of pycparser library

Do you want an application of the pycparser library beyond tiny examples? Here it is. Let me introduce my new application, "macro-of-inline".

https://github.com/akiradeveloper/macro-of-inline

macro-of-inline is a kind of preprocessor that translates inline functions to equivalent macros. Some bad compilers don't support function inlining, and that's where my application shines.

Thanks to pycparser, macro-of-inline is only a few hundred LOC and thus easy to understand. If you have any interest in my project, feel free to contact me. Fixes and refactoring are all welcome.

Cheers,

Akira

Parsing doesn't work with a FILE pointer

Hello!
I'm using pycparser to generate the symbol tree in an IDE. The problem occurs when the FILE pointer is used. For example:

code = """
FILE * fp;
void foo( void ) {
return;
}
"""

parser = c_parser.CParser()
ast = parser.parse(code)

traceback

ParseError: :2:6: before: *
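This is the typedef issue described in the README's fake-headers section: FILE is not a known type name until a typedef for it has been seen (the fake stdio.h exists to provide exactly that). A minimal workaround sketch, using an illustrative typedef (struct _IO_FILE is just a placeholder tag):

```python
from pycparser import c_parser

code = """
FILE * fp;
void foo( void ) {
return;
}
"""

# FILE only becomes a known type name after a typedef for it has
# been seen; the fake libc headers normally provide this.
fake_typedefs = "typedef struct _IO_FILE FILE;\n"

ast = c_parser.CParser().parse(fake_typedefs + code)
ast.show()
```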

Implement implicit typedefs

Hi,

Instead of having a fake libc with a whole bunch of typedefs why not implement a catchall rule that just typedefs an unknown storage type to an int or something else that would indicate to the user that the storage type was unresolvable?

This would increase portability as a lot of OSes have really weird storage types in their libc implementations like Darwin, for example. It would also get rid of the fake_libc requirement completely.

Windows: utils/* not being installed?

I noticed that cpp.exe is not installed with pycparser. Is that a bug or a deliberate choice? It would be enormously helpful to be able to depend on cpp.exe when pycparser is installed as in e.g.

CPPPATH = os.path.join(os.path.abspath(os.path.dirname(pycparser.__file__)),'utils', 'cpp.exe')

Statement expressions

The GNU compiler allows statements and declarations in expressions (see http://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html). The ARM compiler supports these as well (see http://infocenter.arm.com/help/topic/com.arm.doc.faqs/ka14717.html). pycparser currently cannot deal with such expressions (it reports an error "before: {").
Although I guess this might be out of the current scope of the project, the fact that the two major compilers in the embedded world have this feature makes a strong case for supporting such expressions in pycparser as well, I believe. Therefore, I would like to ask for such a consideration.

Different ASTs from codes the same in meaning

Hi,

Parsing t1 and t2 produces different ASTs, which really bugs me. My code assumes the parent of a Return node is always a Compound; however, in t1 it is not.

Should we fix this? or is this your intention?

If this is your intention, I will write a workaround to rewrite all if, while and for statements into the form used in t2. Please tell me if handling only these three is not sufficient.

from pycparser import c_parser

t1 = r"""
int f()
{
        if (0)
                return 0;

        while (0)
                return 0;

        for (;;)
                return 0;
}
"""

t2 = r"""
int f()
{
        if (0) {
                return 0;
        }

        while (0) {
                return 0;
        }

        for (;;) {
                return 0;
        }
}
"""

def parse(t):
        parser = c_parser.CParser()
        parser.parse(t).show()

parse(t1)
parse(t2)

OUTPUT:

FileAST:
  FuncDef:
    Decl: f, [], [], []
      FuncDecl:
        TypeDecl: f, []
          IdentifierType: ['int']
    Compound:
      If:
        Constant: int, 0
        Return:
          Constant: int, 0
      While:
        Constant: int, 0
        Return:
          Constant: int, 0
      For:
        Return:
          Constant: int, 0
FileAST:
  FuncDef:
    Decl: f, [], [], []
      FuncDecl:
        TypeDecl: f, []
          IdentifierType: ['int']
    Compound:
      If:
        Constant: int, 0
        Compound:
          Return:
            Constant: int, 0
      While:
        Constant: int, 0
        Compound:
          Return:
            Constant: int, 0
      For:
        Compound:
          Return:
            Constant: int, 0

Dump AST of a C file as a JSON string

Thanks Eli for writing this module. I wonder if it's possible to dump the AST of a C file in JSON form?
It would make it easier to apply some customized instrumentation.
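There is no built-in JSON dump, but since every AST node exposes attr_names and children(), a serializer is a short recursive function. A sketch (ast_to_dict is a hypothetical helper, not part of pycparser's API):

```python
import json
from pycparser import c_parser

def ast_to_dict(node):
    """Recursively convert a pycparser AST node into a plain dict."""
    result = {'_nodetype': type(node).__name__}
    for attr in node.attr_names:          # simple attributes (strings, lists)
        result[attr] = getattr(node, attr)
    for child_name, child in node.children():  # e.g. 'ext[0]', 'type', 'init'
        result[child_name] = ast_to_dict(child)
    return result

ast = c_parser.CParser().parse("int x = 1;")
print(json.dumps(ast_to_dict(ast), indent=2))
```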

Necessity of PLY's table files

In the past I've found it annoying that PLY creates lex/yacc table files from wherever you happen to run the script. The same is happening with pycparser, so I investigated if these tables actually speed up execution at all with this particular grammar.

I timed the execution of examples\cdecl.py over 50 runs, and found that removing the tables (Syeberman@c218762) actually made execution faster: 44.77 seconds over the 50 runs without tables versus 47.44 with (with tables already written out). (Win7 Intel Core i7-2600 @ 3.4GHz)

It seems David Beazley himself has also questioned the necessity of these tables, as recently as a year ago:
http://comments.gmane.org/gmane.comp.python.ply/636

The only benefit I see for these tables is to allow Python's optimized mode to be used:
http://www.dabeaz.com/ply/ply.html#ply_nn38
I tested with -O (after cleaning __pycache__) and it works just fine, but -OO does indeed fail. More investigation would be needed if -OO support was necessary (perhaps there's a way to keep docstrings for certain modules).

question: macro supported?

Hi, I looked through the examples and tests but I couldn't find anything about parsing and generating C macros.

Is it possible to read/write macros with this program?

An example of macro

#define f(x) do {} while (0)

Why do we need _fake_defines.h?

pycparser needs its input file preprocessed. We may use the fake includes to parse code written for an arbitrary compiler. This is what you describe in https://github.com/eliben/pycparser#what-about-the-standard-c-library-headers

However, one question arises: _fake_typedefs.h is truly needed, because without it parsing often fails. But what about _fake_defines.h? I don't think we need it: pycparser fails to parse only when a type definition (such as a typedef or struct) is missing, and those are listed in _fake_typedefs.h.

Actually, the code below doesn't fail although N is not defined. Furthermore, the semantically incorrect recursive call f(1) is parsed.

def p(t):
        parser = c_parser.CParser()
        parser.parse(t).show()

t1 = r"""
void f()
{
  int x = N;
  int x = true;
  f(1);
}
"""
p(t1)

_fake_defines.h is not only unnecessary, it also has a downside: as a result of preprocessing (i.e. text replacement) with those fake defines, the preprocessed file loses all information about the original source. This means that AST transformations, or even analyses, of the preprocessed code differ from what is expected of the original code. In other words, a file and its preprocessed form become completely unrelated.

As a conclusion, I propose deleting _fake_defines.h.

No error message on pointer multiplication

Hello,
I'm evaluating some C parser libraries and I have a question:
pycparser doesn't report errors on pointer multiplication. Are those extended error-checking features planned?

int main(void)
{
int *a, *b;
a = a * b;
}

pycparser increases memory usage of pyOpenSSL by an order of magnitude

I have opened pyca/pyopenssl#202, which shows that the rewrite of pyOpenSSL from C bindings to its current usage of "cryptography" (the package) between 0.13 & 0.14 has increased memory usage by an order of magnitude, from ~2mb to ~20mb

The issue was debugged from issues with openstack ci jobs running out of memory in [1]

As described there, heapy showed the top heap usage as

Partition of a set of 205366 objects. Total size = 30969040 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  67041  33  5712560  18   5712560  18 str
     1  10260   5  2872800   9   8585360  28 dict of pycparser.plyparser.Coord
     2  27765  14  2367552   8  10952912  35 tuple
     3   1215   1  2246760   7  13199672  43 dict (no owner)
     4   1882   1  1972336   6  15172008  49 dict of pycparser.c_ast.Decl
     5  16085   8  1736232   6  16908240  55 list
     6    360   0  1135296   4  18043536  58 dict of module
     7   4041   2  1131480   4  19175016  62 dict of pycparser.c_ast.TypeDecl
     8   4021   2  1125880   4  20300896  66 dict of pycparser.c_ast.IdentifierType
     9   6984   3   893952   3  21194848  68 types.CodeType
<413 more rows. Type e.g. '_.more' to view.>

which seems to indicate pycparser as pretty much the top consumer. Of course there are many layers here: pyOpenSSL -> cryptography -> cffi -> pycparser

I am not sure if something is tickling a bug in pycparser and causing leakage, or if this is just expected usage. The increase is rather dramatic however.

I understand this isn't a great bug report. I haven't found a regression point where things suddenly started using more memory, and it may be something one of the upper layers is doing wrong.

[1] https://etherpad.openstack.org/p/oom-in-rax-centos7-CI-job

Type modifier chains: better support in generator?

I looked at how pycparserext patches the parser to support new type attributes and ended up rolling my own variant that just used a new AttributeDecl as a type modifier.

On the parser side it seems adding new modifiers is pretty straightforward (just support a 'type' attribute for chaining), but on the generator side, things get uglier. I have to override _generate_type completely to add my new modifier; it requires adding a new recursive call that recursively appends a node to the modifiers, and a new entry to the string-generator if-blocks that does something to nstr depending on my new modifier.

I would rather use the _generate_type that's there, and instead expand the class with new functions that don't overwrite old functionality, but that requires a few modifications.

Why does this #pragma generate an error?

Why does the following #pragma:

#pragma ghs section somestring="some_other_string"

generate this error:

 AssertionError: invalid #pragma directive

bug: interpret comma operator incorrectly

Hi,

Though we expect the input code to be regenerated equivalently, it actually isn't. And worse, the output code can't be compiled. What is the main cause, and how can we fix this?

Is the comma operator outside the scope of pycparser's support?

from pycparser import c_parser, c_generator

import os

t1 = r"""
int f(int x) { return x; }
int main()
{
  int x = f((1,2));
  return 0;
}
"""

parser = c_parser.CParser()
ast = parser.parse(t1)

# FileAST:
#   FuncDef:
#     Decl: f, [], [], []
#       FuncDecl:
#         ParamList:
#           Decl: x, [], [], []
#             TypeDecl: x, []
#               IdentifierType: ['int']
#         TypeDecl: f, []
#           IdentifierType: ['int']
#     Compound:
#       Return:
#         ID: x
#   FuncDef:
#     Decl: main, [], [], []
#       FuncDecl:
#         TypeDecl: main, []
#           IdentifierType: ['int']
#     Compound:
#       Decl: x, [], [], []
#         TypeDecl: x, []
#           IdentifierType: ['int']
#         FuncCall:
#           ID: f
#           ExprList:
#             ExprList:
#               Constant: int, 1
#               Constant: int, 2
#       Return:
#         Constant: int, 0
ast.show()

generator = c_generator.CGenerator()

# int f(int x)
# {
#   return x;
# }
#
# int main()
# {
#   int x = f({1, 2});
#   return 0;
# }
output = generator.visit(ast)
print(output)

# <stdin>: In function 'main':
# <stdin>:8:13: error: expected expression before '{' token
os.system("echo \"%s\" | gcc -xc -" % output)

version 2.11 incompatible with cryptography 0.8.2

Our build started failing just after the release, here are the relevant lines from the log:

14:49:17 Collecting cryptography>=0.7 (from pyOpenSSL->pysaml2==2.4.0->-r requirements.txt (line 32))
14:49:17 Using cached cryptography-0.8.2.tar.gz

14:49:18 Collecting pycparser (from cffi>=0.8->cryptography>=0.7->pyOpenSSL->pysaml2==2.4.0->-r requirements.txt (line 32))
14:49:18 Downloading pycparser-2.11.tar.gz (297kB)

14:49:43 Running setup.py install for cryptography
14:49:43 Complete output from command /mnt_ram/jenkins_jobs/Apps-deploy/jenkins-Apps-deploy-810/bin/python2.7 -c "import setuptools, tokenize;file='/tmp/pip-build-dNQhaD/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\r\n', '\n'), file, 'exec'))" install --record /tmp/pip-tfy0go-record/install-record.txt --single-version-externally-managed --compile --install-headers /mnt_ram/jenkins_jobs/Apps-deploy/jenkins-Apps-deploy-810/include/site/python2.7/cryptography:
14:49:43 running install
14:49:43 Traceback (most recent call last):
14:49:43 File "", line 1, in
14:49:43 File "/tmp/pip-build-dNQhaD/cryptography/setup.py", line 342, in
14:49:43 **keywords_with_side_effects(sys.argv)
14:49:43 File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
14:49:43 dist.run_commands()
14:49:43 File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
14:49:43 self.run_command(cmd)
14:49:43 File "/usr/lib/python2.7/distutils/dist.py", line 971, in run_command
14:49:43 cmd_obj.ensure_finalized()
14:49:43 File "/usr/lib/python2.7/distutils/cmd.py", line 109, in ensure_finalized
14:49:43 self.finalize_options()
14:49:43 File "/tmp/pip-build-dNQhaD/cryptography/setup.py", line 119, in finalize_options
14:49:43 self.distribution.ext_modules = get_ext_modules()
14:49:43 File "/tmp/pip-build-dNQhaD/cryptography/setup.py", line 78, in get_ext_modules
14:49:43 from cryptography.hazmat.bindings.commoncrypto.binding import (
14:49:43 File "/tmp/pip-build-dNQhaD/cryptography/src/cryptography/hazmat/bindings/commoncrypto/binding.py", line 14, in
14:49:43 class Binding(object):
14:49:43 File "/tmp/pip-build-dNQhaD/cryptography/src/cryptography/hazmat/bindings/commoncrypto/binding.py", line 36, in Binding
14:49:43 "-framework", "Security", "-framework", "CoreFoundation"
14:49:43 File "/tmp/pip-build-dNQhaD/cryptography/src/cryptography/hazmat/bindings/utils.py", line 97, in build_ffi_for_binding
14:49:43 extra_link_args=extra_link_args,
14:49:43 File "/tmp/pip-build-dNQhaD/cryptography/src/cryptography/hazmat/bindings/utils.py", line 106, in build_ffi
14:49:43 ffi.cdef(cdef_source)
14:49:43 File "/mnt_ram/jenkins_jobs/Apps-deploy/jenkins-Apps-deploy-810/local/lib/python2.7/site-packages/cffi/api.py", line 106, in cdef
14:49:43 self._parser.parse(csource, override=override, packed=packed)
14:49:43 File "/mnt_ram/jenkins_jobs/Apps-deploy/jenkins-Apps-deploy-810/local/lib/python2.7/site-packages/cffi/cparser.py", line 165, in parse
14:49:43 self._internal_parse(csource)
14:49:43 File "/mnt_ram/jenkins_jobs/Apps-deploy/jenkins-Apps-deploy-810/local/lib/python2.7/site-packages/cffi/cparser.py", line 199, in _internal_parse
14:49:43 realtype = self._get_type(decl.type, name=decl.name)
14:49:43 File "/mnt_ram/jenkins_jobs/Apps-deploy/jenkins-Apps-deploy-810/local/lib/python2.7/site-packages/cffi/cparser.py", line 360, in _get_type
14:49:43 return self._get_struct_union_enum_type('struct', type, name)
14:49:43 File "/mnt_ram/jenkins_jobs/Apps-deploy/jenkins-Apps-deploy-810/local/lib/python2.7/site-packages/cffi/cparser.py", line 434, in _get_struct_union_enum_type
14:49:43 return self._structnode2type[type]
14:49:43 File "/usr/lib/python2.7/weakref.py", line 315, in getitem
14:49:43 return self.data[ref(key)]
14:49:43 TypeError: cannot create weak reference to 'Struct' object

Fedora patch, order of entries in system paths for unit testing

We needed this patch to package pycparser for Fedora; you might want to include it:

HG changeset patch

User Scott Tsai [email protected]

Date 1358446261 -28800

Node ID 12aa73c5da595a08f587c14a74e84bf72f0bf7a0

Parent a46039840b0ed8466bebcddae9d4f1df60d3bc98

tests/all_tests.py: add local paths to the front of sys.path

While doing pycparser development on a machine that already has an
older version of pycparser installed, we want unit tests to run against
the local copy instead of the system-wide copy of pycparser.
This patch adds '.' and '..' to the front of sys.path instead of the back.

diff --git a/tests/all_tests.py b/tests/all_tests.py
--- a/tests/all_tests.py
+++ b/tests/all_tests.py
@@ -1,7 +1,7 @@
#!/usr/bin/env python

import sys
-sys.path.extend(['.', '..'])
+sys.path[0:0] = ['.', '..']

import unittest

String literals with many escapes take a long time to tokenize

Running the following program on the following data takes a long time to parse.

The program:

import pycparser

if __name__ == '__main__':
    ast = pycparser.parse_file("x.c")
    ast.show()

The data (x.c):

int main(int argc, char** argv) {
    "\123\123\123\123\123\123\123\123\123\123\123\123\123\123\123";
}

I think the problem is that the BAD_STRING_LITERAL regular expression is doing backtracking, resulting in exponential running time. Add another '\123' to the string and it will take roughly 3 times as long.

If I break the BAD_STRING_LITERAL pattern, by requiring that it start with some unexpected string 'xyzzy and then some' so that it never starts to match, the program runs quickly.
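The behavior is consistent with catastrophic backtracking, and can be reproduced with a plain `re` pattern of the same general shape, overlapping alternatives under a repetition (a sketch only; this is not pycparser's actual BAD_STRING_LITERAL regex):

```python
import re
import time

# Overlapping alternatives under '*' make a failing match exponential:
# each '\123' chunk can be consumed three different ways ('\123',
# '\12'+'3', '\1'+'2'+'3'), so a failing input of n chunks explores
# about 3**n paths. Illustrative only; not pycparser's actual pattern.
pattern = re.compile(r'^(\\[0-9]+|[0-9])*$')

def time_failed_match(n):
    s = '\\123' * n + 'x'  # trailing 'x' forces full backtracking
    start = time.perf_counter()
    assert pattern.match(s) is None
    return time.perf_counter() - start
```

With this pattern, each extra '\123' chunk roughly triples the running time of a failed match, which matches the "roughly 3 times as long" observation above.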

Is pycparser thread-safe?

I would like to use pycparser with the new concurrent.futures library from Python 3.2.

Is it safe? Are there any requirements for avoiding problems?

Thanks for your help
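No definitive answer is given here, but a defensive sketch is to give each task its own CParser instance so no parser state is shared between tasks (this assumes shared instance state is the main hazard; threads are shown, but the same shape works with ProcessPoolExecutor):

```python
from concurrent.futures import ThreadPoolExecutor
from pycparser import c_parser

c_parser.CParser()  # warm-up: build/load the parse tables once up front

def parse_one(src):
    # A fresh CParser per task, so no mutable parser state (scope
    # stacks, lexer position) is shared between concurrent parses.
    return c_parser.CParser().parse(src)

sources = ['int x;', 'double y;', 'int f(void);']
with ThreadPoolExecutor(max_workers=2) as ex:
    asts = list(ex.map(parse_one, sources))
```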

Question: what is the best way to rewrite all Return nodes into Goto nodes?

In my macro-of-inline project, I am trying to rewrite every return into a goto to a label. For example,

void f()
{
  if (1) { return; }
label:
  ;
}

will be

void f()
{
  if (1) { goto label; }
label:
  ;
}

I think this is impossible with a NodeVisitor and visit_Return, because what is passed when visiting a Return node is a reference to the Return node itself: we can only change the Return node's fields, not replace it with a Goto node.

In pycparser, what is the best way to do this? In Python, is it possible to overwrite the class of an object?
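One workable answer, sketched under the assumption that every return to be rewritten appears directly in a Compound's block_items: instead of visiting the Return node, visit its parent Compound and rebuild the child list (a hypothetical visitor, not part of pycparser):

```python
from pycparser import c_ast, c_parser, c_generator

class ReturnToGoto(c_ast.NodeVisitor):
    """Replace Return statements with 'goto <label>' by rewriting the
    parent Compound's child list. Sketch only: handles returns that
    appear directly in a Compound's block_items."""
    def __init__(self, label):
        self.label = label

    def visit_Compound(self, node):
        if node.block_items:
            node.block_items = [
                c_ast.Goto(self.label, coord=item.coord)
                if isinstance(item, c_ast.Return) else item
                for item in node.block_items
            ]
        self.generic_visit(node)  # recurse into nested compounds

src = 'void f() { if (1) { return; } label: ; }'
ast = c_parser.CParser().parse(src)
ReturnToGoto('label').visit(ast)
print(c_generator.CGenerator().visit(ast))
```

The key point is that the parent owns the child list, so replacing an element there sidesteps the "reference to the node" problem entirely.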

Make the use of __slots__ optional

The recent move to use __slots__ in c_ast.py has broken a fair bit of my code. I pickle ASTs and attach arbitrary extra attributes to nodes, and both of these now fail. Yes, I can work around them. However, in my opinion, the power of pycparser over something faster and more complete like libclang is the flexibility of Python, which __slots__ undermines. (If speed is required, pypy executes pycparser about 3 times faster than CPython in my testing.) I think __slots__ should be optional, if possible.

I forked and attempted to do this but this is hard to do in a way which has minimal impact on your code. I'd be willing to help, but I wanted to speak to you before working on a massive PR which you would understandably reject.

Currently my solution is a nasty monkey patch which I run before using pycparser which iterates through all the classes in c_ast.py, replacing them with clone classes with the same __dict__ as the original but with the __slots__ and member attributes removed to restore original behaviour.

Link to Reddit discussion for reference - http://www.reddit.com/r/Python/comments/38alfw/trying_to_define_a_class_which_can_optionally_use/

pycparser accepts parseable but nonsensical declaration

void (*fptrs)(void)[4];

While this successfully parses, you can't have an array of void. It might be worth considering special casing this as invalid, because any C frontend would reject it, and it means that anything using pycparser that doesn't check for it will return funny results. pycparser's cdecl implementation, for example, accepts it without warnings:

fptrs is a pointer to function(void) returning array[4] of void

Compare to original cdecl:

cdecl> explain void (*fptrs)(void)[4];
Warning: Unsupported in C -- 'array of type void'
        (maybe you mean "array of type pointer to void")
declare fptrs as pointer to function (void) returning array 4 of void

Not possible to serialize the AST

pickle.dump can't handle the AST because the AST node classes use __slots__ but don't implement __getstate__.

It'd be useful to add that functionality. Use case: parse a large repository once and store the AST for further processing. This way the source tree is parsed only once, saving time.
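For illustration, here are the state hooks that make a __slots__ class picklable, on a stand-in class (not pycparser's real node type; note that newer pickle protocols handle __slots__ without hooks, but protocol 0, the Python 2 default, does not):

```python
import pickle

class Node:
    # Stand-in for a slotted AST node; not pycparser's actual class.
    __slots__ = ('name', 'coord')

    def __init__(self, name, coord=None):
        self.name = name
        self.coord = coord

    # With __slots__ there is no __dict__, so old pickle protocols
    # need explicit state hooks to know what to serialize.
    def __getstate__(self):
        return {s: getattr(self, s) for s in self.__slots__}

    def __setstate__(self, state):
        for key, value in state.items():
            setattr(self, key, value)

node = pickle.loads(pickle.dumps(Node('FuncDef'), protocol=0))
```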

Parsing an empty struct causes an error

Hello,

I'm using pycparser to parse a C file that is automatically generated by another tool. pycparser raises a ParseError when it parses an empty struct. The code to be parsed is as follows:

============= a.c ================
struct Foo {

};

The python file I ran was the following:
=========== parse.py =============

#!/usr/bin/python

from pycparser import parse_file

ast = parse_file(r'a.c', use_cpp=True)

The error trace is the following:

Traceback (most recent call last):
File "./parse.py", line 3, in
ast = parse_file(r'a.c', use_cpp=True)
File "/Users/zilong/pycparser/pycparser/__init__.py", line 93, in parse_file
return parser.parse(text, filename)
File "/Users/zilong/pycparser/pycparser/c_parser.py", line 138, in parse
debug=debuglevel)
File "/Users/zilong/pycparser/pycparser/ply/yacc.py", line 265, in parse
return self.parseopt_notrack(input,lexer,debug,tracking,tokenfunc)
File "/Users/zilong/pycparser/pycparser/ply/yacc.py", line 1047, in parseopt_notrack
tok = self.errorfunc(errtoken)
File "/Users/zilong/pycparser/pycparser/c_parser.py", line 1631, in p_error
column=self.clex.find_tok_column(p)))
File "/Users/zilong/pycparser/pycparser/plyparser.py", line 54, in _parse_error
raise ParseError("%s: %s" % (coord, msg))

pycparser.plyparser.ParseError: a.c:2:1: before: }

It would be great if you could point out to me how to fix it. Thanks a lot.

Slow parse using python ConfigParser and requests

I wrote a simple script in Python that reads a config file (INI format) and then uses the requests library to make an HTTP request to my server.

When I run it on my Raspberry Pi I noticed strange behavior: when I use the ConfigParser and requests libraries together, Python has a strange delay interpreting my script. If I comment out the line that imports requests, everything works correctly.

So I created a simple test file to debug this:

$ cat test2.py
import socket
from socket import error as socket_error

import requests

import time

from ConfigParser import SafeConfigParser

Linux time command on this code:

$ time python test2.py
real    0m21.763s
user    0m21.260s
sys     0m0.300s

If I comment out the requests import, this is the output:

$ time python test2.py
real    0m0.373s
user    0m0.300s
sys     0m0.070s

Library version:

$ pip show requests

---
Metadata-Version: 2.0
Name: requests
Version: 2.7.0
Summary: Python HTTP for Humans.
Home-page: http://python-requests.org
Author: Kenneth Reitz
Author-email: [email protected]
License: Apache 2.0
Location: /usr/local/lib/python2.7/dist-packages
Requires:

$ pip show ConfigParser

---
Metadata-Version: 1.1
Name: configparser
Version: 3.3.0.post2
Summary: This library brings the updated configparser from Python 3.2+ to Python 2.6-2.7.
Home-page: http://docs.python.org/3/library/configparser.html
Author: Łukasz Langa
Author-email: [email protected]
License: MIT
Location: /usr/local/lib/python2.7/dist-packages
Requires:

$ pip show pycparser

---
Metadata-Version: 1.1
Name: pycparser
Version: 2.12
Summary: C parser in Python
Home-page: https://github.com/eliben/pycparser
Author: Eli Bendersky
Author-email: [email protected]
License: BSD
Location: /usr/local/lib/python2.7/dist-packages
Requires:

$ python --version
Python 2.7.3

This is the output of "python -v test2.py"

[...]
# /usr/lib/python2.7/pickle.pyc matches /usr/lib/python2.7/pickle.py
import pickle # precompiled from /usr/lib/python2.7/pickle.pyc
import marshal # builtin
# /usr/local/lib/python2.7/dist-packages/pycparser/c_parser.pyc matches /usr/local/lib/python2.7/dist-packages/pycparser/c_parser.py
import pycparser.c_parser # precompiled from /usr/local/lib/python2.7/dist-packages/pycparser/c_parser.pyc
import pycparser.ply # directory /usr/local/lib/python2.7/dist-packages/pycparser/ply
# /usr/local/lib/python2.7/dist-packages/pycparser/ply/__init__.pyc matches /usr/local/lib/python2.7/dist-packages/pycparser/ply/__init__.py
import pycparser.ply # precompiled from /usr/local/lib/python2.7/dist-packages/pycparser/ply/__init__.pyc
# /usr/local/lib/python2.7/dist-packages/pycparser/ply/yacc.pyc matches /usr/local/lib/python2.7/dist-packages/pycparser/ply/yacc.py
import pycparser.ply.yacc # precompiled from /usr/local/lib/python2.7/dist-packages/pycparser/ply/yacc.pyc
# /usr/local/lib/python2.7/dist-packages/pycparser/c_ast.pyc matches /usr/local/lib/python2.7/dist-packages/pycparser/c_ast.py
import pycparser.c_ast # precompiled from /usr/local/lib/python2.7/dist-packages/pycparser/c_ast.pyc
# /usr/local/lib/python2.7/dist-packages/pycparser/c_lexer.pyc matches /usr/local/lib/python2.7/dist-packages/pycparser/c_lexer.py
import pycparser.c_lexer # precompiled from /usr/local/lib/python2.7/dist-packages/pycparser/c_lexer.pyc
# /usr/local/lib/python2.7/dist-packages/pycparser/ply/lex.pyc matches /usr/local/lib/python2.7/dist-packages/pycparser/ply/lex.py
import pycparser.ply.lex # precompiled from /usr/local/lib/python2.7/dist-packages/pycparser/ply/lex.pyc
# /usr/lib/python2.7/copy.pyc matches /usr/lib/python2.7/copy.py
import copy # precompiled from /usr/lib/python2.7/copy.pyc
# /usr/local/lib/python2.7/dist-packages/pycparser/plyparser.pyc matches /usr/local/lib/python2.7/dist-packages/pycparser/plyparser.py
import pycparser.plyparser # precompiled from /usr/local/lib/python2.7/dist-packages/pycparser/plyparser.pyc
# /usr/local/lib/python2.7/dist-packages/pycparser/ast_transforms.pyc matches /usr/local/lib/python2.7/dist-packages/pycparser/ast_transforms.py
import pycparser.ast_transforms # precompiled from /usr/local/lib/python2.7/dist-packages/pycparser/ast_transforms.pyc
dlopen("/usr/local/lib/python2.7/dist-packages/_cffi_backend.so", 2);
import _cffi_backend # dynamically loaded from /usr/local/lib/python2.7/dist-packages/_cffi_backend.so
# /usr/local/lib/python2.7/dist-packages/pycparser/lextab.pyc matches /usr/local/lib/python2.7/dist-packages/pycparser/lextab.py
import pycparser.lextab # precompiled from /usr/local/lib/python2.7/dist-packages/pycparser/lextab.pyc
# /usr/lib/python2.7/encodings/latin_1.pyc matches /usr/lib/python2.7/encodings/latin_1.py
import encodings.latin_1 # precompiled from /usr/lib/python2.7/encodings/latin_1.pyc
# /usr/local/lib/python2.7/dist-packages/pycparser/yacctab.pyc matches /usr/local/lib/python2.7/dist-packages/pycparser/yacctab.py
import pycparser.yacctab # precompiled from /usr/local/lib/python2.7/dist-packages/pycparser/yacctab.pyc
[...]

The Python interpreter hangs for some seconds at the last lines of the previous output:

import pycparser.yacctab # precompiled from /usr/local/lib/python2.7/dist-packages/pycparser/yacctab.pyc

Support for Weakref in __slots__

Hello,
I see you added support for the __slots__ mechanism in the AST. Nice. However, this breaks cffi downstream, since they use weak references to Enum, etc. objects (see their cparser.py file). In turn this breaks petlib, which I maintain.
I think the solution is to include '__weakref__' in the __slots__ list of fields, as per the advice at:
https://docs.python.org/2/reference/datamodel.html#slots
Many thanks,
George
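The fix from the linked docs can be seen on a minimal stand-in class (illustrative only, not pycparser's actual node definition):

```python
import weakref

class Node:
    # Listing '__weakref__' in __slots__ restores the weak-reference
    # support that __slots__ otherwise removes. Stand-in class, not
    # pycparser's real AST node.
    __slots__ = ('name', '__weakref__')

    def __init__(self, name):
        self.name = name

node = Node('Enum')
ref = weakref.ref(node)  # would raise TypeError without '__weakref__'
```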

Syntax Errors reported by pycparser but not gcc

The following code generates no warnings or errors when compiled with gcc with a large number of warning options, yet pycparser reports a syntax error. #includes omitted.

int main()
{
    int i = 5, j = 6, k = 1;
    if ((i = j && k == 1) || k > j)
        printf("Hello, World\n");
    return 0;
}

Coord bug on FOR loops

Hi Eli,

if you run pycparser (2.1, Python 2.7, Windows) over a loop such as:

for(int z=0; z<4; z++){
    // whatever here
}

you can see that there is a coord missing on the 'int z=0' (which is a DeclList) part:

>>> ast.show(showcoord=True)
DeclList:  (at None)
  Decl: z, [], [], [] (at /some/file.c:15)
    TypeDecl: z, [] (at /some/file.c:15)
      IdentifierType: ['int'] (at /some/file.c:15)
    Constant: int, 0 (at /some/file.c:15)

Is it a bug or a feature? (I am not very familiar with coord detection yet, but maybe there is a reason of some sort, e.g. "no one can guarantee that the DeclList will not span several lines", or even "declaring an int after the very beginning of the function is not supposed to be supported by the grammar chosen for pycparser anyway".)
But in case it was not intended, I think this could be fixed in c_parser.py with:

    def p_iteration_statement_4(self, p):
        """ iteration_statement : FOR LPAREN declaration expression_opt SEMI expression_opt RPAREN statement """
#        p[0] = c_ast.For(c_ast.DeclList(p[3]), p[4], p[6], p[8], self._coord(p.lineno(1)))
        p[0] = c_ast.For(c_ast.DeclList(p[3], self._coord(p.lineno(1))), p[4], p[6], p[8], self._coord(p.lineno(1)))    # fix

Tell me what you think about this.
Anyway, thanks for your valuable work on this parser :D

Fedora patch, order of entries in system paths for _build_tables.py

Basically the same idea as in issue 11: if there is already a pycparser package installed on the system, _build_tables.py fails because of the path order.

diff -up pycparser-release_v2.09.1/pycparser/_build_tables.py.tables-sys-path pycparser-release_v2.09.1/pycparser/_build_tables.py
--- pycparser-release_v2.09.1/pycparser/_build_tables.py.tables-sys-path 2013-07-22 13:17:44.950531002 -0600
+++ pycparser-release_v2.09.1/pycparser/_build_tables.py 2013-07-22 13:18:29.188526142 -0600
@@ -17,7 +17,7 @@ ast_gen = ASTCodeGenerator('_c_ast.cfg')
ast_gen.generate(open('c_ast.py', 'w'))

import sys
-sys.path.extend(['.', '..'])
+sys.path[0:0] = ['.', '..']
from pycparser import c_parser

Generates the tables

C11 support?

Are there any plans for pycparser to support C11, most particularly _Generic, _Static_assert, and _Noreturn?

Handling error message from cpp "header toto.h not found".

Problem:

  • pycparser analyzes a C source file,
  • the C source includes a header file which doesn't exist.

cpp prints a message like "header toto.h not found" on the console.
But pycparser doesn't handle this message, and the analysis is done anyway.

Proposition:

  • handle this kind of message (read from stderr when cpp is launched),
  • raise an exception to stop the pycparser analysis and avoid a false analysis.

Thanks for your help
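The proposition can be sketched with subprocess: run cpp, and raise instead of parsing when it reports failure (a hypothetical helper; the parameter names are modeled on, but not taken from, pycparser's parse_file):

```python
import subprocess

def preprocess_or_raise(filename, cpp_path='cpp', cpp_args=()):
    """Run the C preprocessor and raise on failure, instead of
    silently parsing incomplete output. Hypothetical helper, not
    pycparser's actual parse_file implementation."""
    proc = subprocess.run(
        [cpp_path, *cpp_args, filename],
        capture_output=True, text=True)
    if proc.returncode != 0:
        # Surface cpp's stderr (e.g. "header toto.h not found") to
        # the caller rather than proceeding with a false analysis.
        raise RuntimeError(
            'preprocessing %s failed:\n%s' % (filename, proc.stderr))
    return proc.stdout
```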


Parse error when parsing function declaration

When parsing a function declaration like
void f(double a[restrict][5]);
pycparser emits a parsing error. The above example was copied from section 6.7.5.3 (function declarators) of the C standard. I guess the error is due to pycparser not recognizing the restrict keyword in the array declaration.

[BUG] Comma operator: parenthesis discarded

Hi,

I think the C-to-C translation below is a bug. It discards the parentheses around (c = 10, 0), even though they are necessary for the comma operator: without them, the generated code parses differently.

from pycparser import c_parser, c_generator

t = r"""
int main()
{
        r = 0 ? 0 : (c = 10, 0);
}
"""

parser = c_parser.CParser()
ast = parser.parse(t)

ast.show()
# FileAST:
#   FuncDef:
#     Decl: main, [], [], []
#       FuncDecl:
#         TypeDecl: main, []
#           IdentifierType: ['int']
#     Compound:
#       Assignment: =
#         ID: r
#         TernaryOp:
#           Constant: int, 0
#           Constant: int, 0
#           ExprList:
#             Assignment: =
#               ID: c
#               Constant: int, 10
#             Constant: int, 0

print c_generator.CGenerator().visit(ast)
# int main()
# {
#   r = 0 ? 0 : c = 10, 0;
# }

pycparser.plyparser.ParseError: /usr/lib/gcc/x86_64-redhat-linux/4.8.2/include/stdarg.h:40:27: before: __gnuc_va_list

I tried to parse this simple program:

#include <stdio.h>

#define MY_AGO 21.5

/*
 * Main function of the program.
 *
 * Input:  argc - number of command-line parameters
 *         argv - the command-line arguments
 *
 * Output: 0 - successful program termination
 */
int main(int argc, char **argv) {
    printf("Hello, World!\n");
    printf("I am %d ago.\n", MY_AGO);
    printf("You can use \"//\" for one-line comments\n");
    return 0;
}

I ran the C preprocessor cpp before pycparser:

cpp -std=c99 ./input.c ./output.c

And I got the following error:

Traceback (most recent call last):
  File "/home/osanve/PycharmProjects/MyProject/src/main.py", line 118, in <module>
    ast = parser.parse(text, filename='<none>')
  File "/usr/lib/python2.7/site-packages/pycparser/c_parser.py", line 138, in parse
    debug=debuglevel)
  File "/usr/lib/python2.7/site-packages/pycparser/ply/yacc.py", line 265, in parse
    return self.parseopt_notrack(input,lexer,debug,tracking,tokenfunc)
  File "/usr/lib/python2.7/site-packages/pycparser/ply/yacc.py", line 1047, in parseopt_notrack
    tok = self.errorfunc(errtoken)
  File "/usr/lib/python2.7/site-packages/pycparser/c_parser.py", line 1613, in p_error
    column=self.clex.find_tok_column(p)))
  File "/usr/lib/python2.7/site-packages/pycparser/plyparser.py", line 54, in _parse_error
    raise ParseError("%s: %s" % (coord, msg))
pycparser.plyparser.ParseError: /usr/lib/gcc/x86_64-redhat-linux/4.8.2/include/stdarg.h:40:27: before: __gnuc_va_list

Repeated Struct typedef generates an error

This generates an error:

typedef const struct Ntf_FBlock_L_Type
{
int FBlockIndex;
int NumDev;
int PtrPropTab;
} TNtfFBlockL, *pTNtfFBlockL;

typedef const struct Ntf_FBlock_L_Type
{
int FBlockIndex;
int NumDev;
int PtrPropTab;
} TNtfFBlockL, *pTNtfFBlockL;

(it is repeated on purpose). However, a repeated typedef like this:

typedef int INTEGER;
typedef int INTEGER;

does not generate any error, nor does this:

typedef const struct Ntf_FBlock_L_Type
{
int FBlockIndex;
int NumDev;
int PtrPropTab;
} TNtfFBlockL;;

typedef const struct Ntf_FBlock_L_Type
{
int FBlockIndex;
int NumDev;
int PtrPropTab;
} TNtfFBlockL;

So it only generates an error in the first case; it throws a ParseError "before: ,".

What can I do to avoid this situation?

PyCParser Parsing error on __attribute__ ((

Eli,

I am trying to use pycparser to do static testing on embedded C. I am using pycparser v2.10.

The target code can be compiled by gcc, and cpp @ccargs.txt i2cm_5410x.c (testing against a master I2C driver) seems to be OK. I have created a configuration file (ccargs.txt) containing the include paths and defines required to compile the module; all this seems to be working fine.

When I run my modified version of cpp_libc.py, I receive a parsing error. Here is the traceback.

Traceback (most recent call last):
File "cpp_libc.py", line 27, in
cpp_args=r'@ccargs.txt')
File "build\bdist.win32\egg\pycparser\__init__.py", line 93, in parse_file
File "build\bdist.win32\egg\pycparser\c_parser.py", line 138, in parse
File "build\bdist.win32\egg\pycparser\ply\yacc.py", line 265, in parse
File "build\bdist.win32\egg\pycparser\ply\yacc.py", line 1047, in parseopt_notrack
File "build\bdist.win32\egg\pycparser\c_parser.py", line 1613, in p_error
File "build\bdist.win32\egg\pycparser\plyparser.py", line 54, in _parse_error
pycparser.plyparser.ParseError: ../inc/core_cmInstr.h:325:16: before: (

The function it is complaining about is this:
__attribute__( ( always_inline ) ) __STATIC_INLINE void __NOP(void)
{
__ASM volatile ("nop");
}

Specifically, the first line (__attribute__) at the second open "("... Any idea what may be causing this?

Thanks,

ACV
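A common workaround (described in the pycparser FAQ) is to define GCC-specific extensions away during preprocessing, e.g. passing -D'__attribute__(x)=' to cpp. The same idea is sketched here self-contained with a regex (fragile on nested parentheses; illustrative only):

```python
import re

code = '''__attribute__( ( always_inline ) ) __STATIC_INLINE void __NOP(void)
{
  __ASM volatile ("nop");
}'''

# Emulate cpp -D'__attribute__(x)=' with a regex so the sketch is
# self-contained. __STATIC_INLINE and __ASM would need the same
# treatment before pycparser could parse this; a real setup should
# do all of it in the preprocessor, not with regexes.
stripped = re.sub(r'__attribute__\s*\(\s*\(.*?\)\s*\)', '', code)
```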
