Name: Jim Lloyd
Type: User
Bio: Seasoned engineer with diverse experience in software development. Worked for Apple, eBay, Google, and a variety of startups, including Silver Tail Systems.
Location: San Francisco, California, USA
Blog: https://www.linkedin.com/in/jimlloyd/
Jim Lloyd's Projects
A table top with playing cards. Happens to also be green.
Locally run an Instruction-Tuned Chat-Style LLM
Bluebird is a full featured promise library with unmatched performance.
A throwaway repository to illustrate a bug in either [email protected], or node-java's use of bluebird.
Build a Cloud Run compliant container for serving a TensorFlow prediction model
⏩ Continue is an open-source autopilot for VS Code and JetBrains, the easiest way to code with any LLM
Quick T3 Stack with SvelteKit for rapid deployment of highly performant typesafe web apps.
A siteswap juggling animator using d3 and html5 canvas
A simple framework for d3 canvas 'simulations'
Export TypeScript .d.ts files as an external module definition
Pygments lexer for Gherkin
Review topic branch commits by files changed
High Performance NATS Server
A minimalistic JavaScript Gremlin Server/Rexster client
Implementation of Gremlin for node.js
Documentation generation, in the spirit of literate programming.
The C based gRPC (C++, Python, Ruby, Objective-C, PHP, C#)
C++14 header-only easy-to-use cryptographic hash library
Interpolate json through handlebars to rendered text
A node.js client for HeartsNN
Build a Jekyll blog in minutes, without touching the command line.
deterministic JSON.stringify() with custom sorting to get deterministic hashes from stringified results
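The core idea behind a deterministic stringify, sketched below under assumed semantics (this is not the library's actual API, and `stableStringify` is a hypothetical name): recursively serialize with object keys in sorted order, so two logically equal objects always hash identically.

```javascript
// Minimal sketch: deterministic JSON serialization by recursively
// sorting object keys before emitting each object's members.
function stableStringify(value) {
  if (Array.isArray(value)) {
    // Arrays keep their element order; only object keys are sorted.
    return "[" + value.map(stableStringify).join(",") + "]";
  }
  if (value !== null && typeof value === "object") {
    const keys = Object.keys(value).sort();
    return (
      "{" +
      keys
        .map((k) => JSON.stringify(k) + ":" + stableStringify(value[k]))
        .join(",") +
      "}"
    );
  }
  // Primitives serialize as plain JSON.
  return JSON.stringify(value);
}

// Same logical object, different key insertion order, identical output:
console.log(stableStringify({ b: 1, a: 2 }) === stableStringify({ a: 2, b: 1 })); // true
```

A real implementation would also need to handle cycles and allow a custom comparator for the key sort, which is what the "custom sorting" in the description refers to.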
A multi-platform siteswap juggling animator
A blazing fast and lightweight C asymmetric coroutine library
Serial Port Programming in C++
A wrapper around Readable producing a pausable, line-by-line readable stream.
Inference code for LLaMA models
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). It provides a simple yet robust interface using llama-cpp-python, allowing users to chat with LLMs, execute structured function calls, and get structured output.