bufferapp / buda-protobufs

Repository for Buffer Unified Data Architecture protocol buffers.

License: MIT License

Languages: Shell 61.69%, Makefile 25.37%, Python 12.94%
Topics: protobuf, schemas, buda

buda-protobufs's People

Contributors: davidgasquez, djfarrelly, hharnisc, mallen5311, michael-erasmus

buda-protobufs's Issues

Perhaps rename `TrackXXX` RPCs to `CollectXXX`?

Hey @davidgasquez!

Just a small one, but I was curious what you think about changing the naming convention for the EventsCollector service RPCs from the prefix `Track` to `Collect`?

'Collect' feels more in line with what an EventsCollector does, and I like that naming more than 'tracking'.
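Sketched against a hypothetical slice of the service (the RPC and message names below are assumptions, not the repo's actual schema), the rename would look like:

```protobuf
syntax = "proto3";

package buda.services;

// Placeholder messages; the real request/response types live in the repo.
message Visit {}
message CollectResponse {}

service EventsCollector {
  // Before the rename this would have been `rpc TrackVisit (...)`.
  rpc CollectVisit (Visit) returns (CollectResponse);
}
```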

Would love your thoughts!

Purpose and scope of this repository

Hey there @michael-erasmus and @djfarrelly! I'd love to provide more context about this repo and what we could use it for. πŸ˜„ Please don't hesitate to leave comments or feedback!

  • The entities folder could contain basic protobuf files describing some of our entities (updates, visits, ...). The services folder could act as an inventory of all Buffer services using gRPC.
  • This repository will act as a communication layer between microservices/teams and also as a reference of the current entities schema. For example, when writing the update.proto definition I had several questions that could be discussed in this repository.
  • Compiling the protobufs is left to the specific consumers. They can have this repo as a git submodule fixed to a specific commit or they could even build them manually and save the generated code (I think we should expect forward compatibility).

Not sure if I already shared it but I took some ideas from a Namely article about gRPC.

I know it might be quite early to discuss this and I want to keep this dynamic. Happy to make any changes or even remove the repository if it doesn't feel useful enough.

Add the url query parameter to visits

We do track some specific query parameters (the utm_ parameters), but it could be cool to add all query parameters sent in the URL too.

This could maybe be implemented as a map:

map<string, string> query_params = 99;
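In context, the map field might sit inside the Visit message something like this (the surrounding field and its number are illustrative, not the repo's actual definition):

```protobuf
syntax = "proto3";

package buda.entities;

message Visit {
  // Illustrative field; the real Visit message has its own fields.
  string url = 1;

  // All query parameters sent with the URL, keyed by parameter name,
  // e.g. {"ref": "newsletter", "page": "2"}.
  map<string, string> query_params = 99;
}
```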

`product` or `application` field on signup events

After chatting with Tyler about starting to track Reply data in Buda, it seems like we might need an additional field on signup events to be able to filter on which product the signup happened in (Publish or Reply, for now).

Once we have a single signup service that all the products can use, we should hopefully be able to track this in one place, but it will probably still be useful to model a user being able to sign up for multiple products.
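One possible shape for this, assuming an enum rather than a free-form string (names and field numbers here are hypothetical):

```protobuf
syntax = "proto3";

package buda.entities;

// Products a signup can belong to; UNSPECIFIED keeps the zero value
// distinct, so old events that never set the field stay unambiguous.
enum Product {
  PRODUCT_UNSPECIFIED = 0;
  PUBLISH = 1;
  REPLY = 2;
}

message SignupEvent {
  string user_id = 1;
  Product product = 2;
}
```

An enum makes the allowed values explicit in the schema, though a plain string field would be easier to extend without a schema change.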

cc @djfarrelly Would love your thoughts on this as well!

Automate npm release

Followup from #12.

Would be awesome to also have npm automatically published with changes on master.

Add `experiments` to visits

This is something we have discussed before: the idea would be to add an Experiment message and an experiments field, a list of the experiments associated with the user_id or visitor_id on a visit.
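A minimal sketch of what that could look like (field names and numbers are assumptions, not the repo's actual schema):

```protobuf
syntax = "proto3";

package buda.entities;

// One experiment the user or visitor is enrolled in.
message Experiment {
  string name = 1;       // e.g. "homepage-hero-test"
  string variation = 2;  // e.g. "control" or "variant-a"
}

message Visit {
  string user_id = 1;
  string visitor_id = 2;

  // Experiments active for this user_id/visitor_id during the visit.
  repeated Experiment experiments = 3;
}
```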

This could then be implemented alongside the experiments implementation on the marketing-site.

This would be awesome for the Growth and Marketing teams once it's modelled in Looker.

cc @djfarrelly @davidgasquez

Travis deployment fails

Hi @davidgasquez!

I'm seeing issues when pushing a new tagged release; it seems like the deployment on Travis is failing with the following error:

HTTPError: 403 Client Error: Invalid or non-existent authentication information. for url: https://upload.pypi.org/legacy/

Would you happen to know how to fix this?

Flatten directory structure issue

Hi @davidgasquez!

Would love your help on this one! I've been working with @djfarrelly on getting the JS client for the gRPC service going and he's hit some snags with how we have structured the .proto files, which makes it hard for him to generate the JS client code.

He asked if we could flatten the directory structure so that we move from

buda-protobufs
├── README.md
├── ...
└── buda
    ├── entities
    │   ├── foo.proto
    │   ├── bar.proto
    │   └── ...
    └── services
        ├── foo_service.proto
        └── ...

to

buda-protobufs
├── README.md
├── ...
└── buda
    ├── foo.proto
    ├── bar.proto
    ├── foo_service.proto
    └── ...

We also want to update the import statements in the proto files from something like `import "buda/entities/funnel.proto"` to just `import "funnel.proto"`.

I've gone ahead and created a new branch and made those changes to the proto files and dir structures in these commits:

87f6cb3

4ddaf77

However, I'm running into issues getting the make package-python command to work with this setup. I've tried a bunch of things to no avail: after playing with the $PYTHON_DIR, $PROTO_DIR, and $ENTITIES vars and even loading the volumes differently, I still can't get protoc to run.

Would you mind trying it out and seeing if I'm missing something?!

Only publish tagged commits

Idea

To minimize unnecessary publish commands being run (which fail due to duplicate package versions), we could limit our deploys to tagged commits only: https://docs.travis-ci.com/user/deployment/npm/#What-to-release

The workflow would be:

# Update the version file
echo "1.0.4" > VERSION
# Stage and commit the version bump
git add VERSION
git commit -m "1.0.4"
# Tag it
git tag 1.0.4
# Push the commit and the tag (`git push --tags` alone pushes only tags)
git push && git push --tags

That could be pushed to master directly, or to a branch and then rebased into master.

What do you think @davidgasquez?

Add the domain to visits

Right now we can't see on which domain a visit event was fired. It would be awesome to add this as a field.

Automate releases

It could be awesome to automate new releases, perhaps to both the Python and Node.js packages.

Right now we can create a Python package .tar.gz file that can be uploaded manually as a release. We use the VERSION file to set the version name in the setup.py file and in the compressed package name.

So far I've also created new Node packages by manually publishing them to npm with npm publish. I've bumped the version manually in the packages/node/package.json file, and the Python and Node packages have different versions.

I can see it being useful to eventually have a full CI solution, but perhaps automating the process locally first would be a good first step?

Here is how I propose we could perhaps do it:

  • We create a make packages command that will:

    • Build the Python and Node packages
    • Use the VERSION file to bump the version on both packages (not sure if it's feasible for the Node and Python packages to share a version number?)
  • We create a make publish command that will:

    • Create a new release on GitHub with the correct version tag
    • Upload the Python package .tar.gz to the release
    • Upload the Node package as a .tar.gz to the release (not sure if this is needed?)
    • Publish the npm package to npmjs
    • Publish the Python package to PyPI?

Would love your thoughts @davidgasquez and @djfarrelly! Definitely open to any other approaches too!
