openinterpreter / 01

The open-source language model computer

Home Page: http://openinterpreter.com/01

License: GNU Affero General Public License v3.0

Python 71.38% Rust 1.82% C++ 17.02% TypeScript 9.72% JavaScript 0.06%

01's Introduction

Discord

The open-source language model computer.

Preorder the Light | Get Updates | Documentation


OI-O1-BannerDemo-2

We want to help you build. Apply for 1-on-1 support.


Important

This experimental project is under rapid development and lacks basic safeguards. Until a stable 1.0 release, only run this repository on devices without sensitive information or access to paid services.

A substantial rewrite to address these concerns and more, including the addition of RealtimeTTS and RealtimeSTT, is occurring here.


The 01 Project is building an open-source ecosystem for AI devices.

Our flagship operating system can power conversational devices like the Rabbit R1, Humane Pin, or Star Trek computer.

We intend to become the GNU/Linux of this space by staying open, modular, and free.


Software

git clone https://github.com/OpenInterpreter/01 # Clone the repository
cd 01/software # CD into the source directory
brew install portaudio ffmpeg cmake # Install macOS dependencies
poetry install # Install Python dependencies
export OPENAI_API_KEY=sk... # OR run `poetry run 01 --local` to run everything locally
poetry run 01 # Runs the 01 Light simulator (hold your spacebar, speak, release)

The RealtimeTTS and RealtimeSTT libraries in the incoming 01-rewrite are thanks to the state-of-the-art voice interface work of Kolja Beigel. Please star those repos and consider contributing to / utilizing those projects!

Hardware

  • The 01 Light is an ESP32-based voice interface. Build instructions are here; a list of what to buy is here.
  • It works in tandem with the 01 Server (setup guide below) running on your home computer.
  • macOS and Ubuntu are supported by running poetry run 01 (Windows is supported experimentally). This uses your spacebar to simulate the 01 Light.
  • (coming soon) The 01 Heavy is a standalone device that runs everything locally.

We need your help supporting & building more hardware. The 01 should be able to run on any device with input (microphone, keyboard, etc.), output (speakers, screens, motors, etc.), and an internet connection (or sufficient compute to run everything locally). Contribution Guide →


What does it do?

The 01 exposes a speech-to-speech websocket at localhost:10001.

If you stream raw audio bytes to the / endpoint in Streaming LMC format, you will receive its response in the same format.
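For illustration, here is a minimal client sketch using the websockets library. The chunk fields shown (role, type, format, start/end flags) are assumptions about the Streaming LMC format rather than a definitive spec:

# Sketch of a speech-to-speech client, assuming the server at localhost:10001
# accepts Streaming LMC messages over a websocket (field names are assumptions).
import asyncio
import json
import websockets

async def send_audio(path: str):
    async with websockets.connect("ws://localhost:10001/") as ws:
        # Announce the start of an audio message.
        await ws.send(json.dumps({"role": "user", "type": "audio", "format": "bytes.wav", "start": True}))
        with open(path, "rb") as f:
            while chunk := f.read(4096):
                await ws.send(chunk)  # raw audio bytes
        await ws.send(json.dumps({"role": "user", "type": "audio", "format": "bytes.wav", "end": True}))

        # Read whatever the server streams back (text and/or audio chunks).
        async for message in ws:
            print(type(message), message if isinstance(message, str) else f"{len(message)} bytes")

asyncio.run(send_audio("hello.wav"))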

Inspired in part by Andrej Karpathy's LLM OS, we run a code-interpreting language model, and call it when certain events occur at your computer's kernel.

The 01 wraps this in a voice interface:


LMC

Protocols

LMC Messages

To communicate with different components of this system, we introduce the LMC Messages format, which extends OpenAI's messages format to include a "computer" role:

LMC.mov
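As a rough illustration (the exact field values here are assumptions, not a spec), an LMC conversation might look like this in Python, with the "computer" role carrying execution output back to the model:

# Sketch of LMC messages: OpenAI-style messages plus a "computer" role.
messages = [
    {"role": "user", "type": "message", "content": "What time is it?"},
    {"role": "assistant", "type": "code", "format": "python", "content": "import time; print(time.ctime())"},
    {"role": "computer", "type": "console", "format": "output", "content": "Fri Mar 22 10:41:00 2024"},
    {"role": "assistant", "type": "message", "content": "It's 10:41."},
]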

Dynamic System Messages

Dynamic System Messages enable you to execute code inside the LLM's system message, moments before it appears to the AI.

# Edit the following settings in i.py
interpreter.system_message = r" The time is {{time.time()}}. " # Anything in double brackets will be executed as Python
interpreter.chat("What time is it?") # It will know, without making a tool/API call

Guides

01 Server

To run the server on your Desktop and connect it to your 01 Light, run the following commands:

brew install ngrok/ngrok/ngrok
ngrok authtoken ... # Use your ngrok authtoken
poetry run 01 --server --expose

The final command will print a server URL. You can enter this into your 01 Light's captive WiFi portal to connect to your 01 Server.

Local Mode

poetry run 01 --local

If you want to run local speech-to-text using Whisper, you must install Rust. Follow the instructions given here.

Customizations

To customize the behavior of the system, edit the system message, model, skills library path, etc. in i.py. This file sets up an interpreter, and is powered by Open Interpreter.
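For instance, the kinds of edits one might make there look roughly like this. Only interpreter.llm.model is confirmed elsewhere in this page; the other attribute names are assumptions for illustration:

# Sketch of customizations in i.py (attribute names beyond llm.model are assumptions).
from interpreter import interpreter

interpreter.llm.model = "gpt-4"                 # e.g. "ollama/llama2" for a local model
interpreter.system_message += " Keep answers to one short sentence."
interpreter.computer.skills.path = "./skills"   # hypothetical skills-library location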

Ubuntu Dependencies

sudo apt-get install portaudio19-dev ffmpeg cmake

Contributors

01 project contributors

Please see our contributing guidelines for more details on how to get involved.


Roadmap

Visit our roadmap to see the future of the 01.


Background

The story of devices that came before the 01.

Things we want to steal great ideas from.


01's People

Contributors

abdullah-gohar, arthurbnhm, benxu3, birbbit, dagmawibabi, dheavy, eltociear, gibru, hpsaturn, human-bee, imajeetyadav, killianlucas, koganei, kubla, leopere, lincolnmroth, llathieyre, martinmf, mikebirdtech, rbrisita, rudrodip, shivenmian, sunwood-ai-labs, tashaskyup, tomchapin, tyfiero, vgel, yuan-manx, zabirauf, zachwe



01's Issues

HTTP Error 404 after selecting a model in --local mode

Running poetry run 01 --server --local proceeds to model selection. After selecting a local LLM, the server crashes with the error urllib.error.HTTPError: HTTP Error 404: Not Found.
This error occurs with both Ollama and LM Studio.

Log:

[?] Which one would you like to use?:
 > Ollama
   LM Studio

3 Ollama models found. To download a new model, run ollama run <model-name>, then start a new 01 session.

For a full list of downloadable models, check out https://ollama.com/library

[?] Select a downloaded Ollama model:
   failed
   NAME
 > llama2


Using Ollama model: llama2

Exception in thread Thread-13 (run_until_complete):
Traceback (most recent call last):
  File "C:\Users\Martin\anaconda3\envs\01\Lib\threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "C:\Users\Martin\anaconda3\envs\01\Lib\threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Martin\anaconda3\envs\01\Lib\asyncio\base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  [...]
  File "C:\Users\Martin\anaconda3\envs\01\Lib\urllib\request.py", line 496, in _call_chain
    result = func(*args)
             ^^^^^^^^^^^
  File "C:\Users\Martin\anaconda3\envs\01\Lib\urllib\request.py", line 643, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.


prompt>ollama list
NAME                    ID              SIZE    MODIFIED
codellama:latest        8fdf8f752f6e    3.8 GB  18 minutes ago

OS: Windows 11

If audio is not picked up by mic, message["content"] is empty or "", program throws error

When the audio is not picked up and the message["content"] is empty, it throws this error

Traceback (most recent call last):
  File "/Users/rds_agi/code/01-forked/software/source/server/server.py", line 220, in listener
    audio_file_path = bytes_to_wav(message["content"], mime_type)
  File "/Users/rds_agi/code/01-forked/software/source/server/utils/bytes_to_wav.py", line 55, in bytes_to_wav
    with export_audio_to_wav_ffmpeg(audio_bytes, mime_type) as wav_file_path:
  File "/opt/homebrew/Cellar/[email protected]/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/Users/rds_agi/code/01-forked/software/source/server/utils/bytes_to_wav.py", line 26, in export_audio_to_wav_ffmpeg
    f.write(audio)
TypeError: a bytes-like object is required, not 'str'

Looking at the code where this happens (server.py, around line 214), there is no check for an empty string "".
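A minimal sketch of the kind of guard that could be added before the conversion, assuming only the listener structure shown in the traceback:

# Skip audio messages whose content never got populated (mic picked up nothing).
from source.server.utils.bytes_to_wav import bytes_to_wav  # module path taken from the traceback

def safe_bytes_to_wav(message: dict, mime_type: str):
    """Return a wav path, or None when the mic produced no usable content."""
    content = message.get("content")
    if not content:                          # catches both None and ""
        return None
    return bytes_to_wav(content, mime_type)  # existing helper in bytes_to_wav.py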

Incorrect Default Websocket Port in ESP Firmware

Describe the bug
The default websocket port in the ESP32 client firmware is 80; it should be set up to use the same port as the ping.

webSocket.begin(server_domain, 80, "/");

I'm working on adding support for a new devkit I have on hand (ESP32-S3-BOX-Lite), so I'll have a larger PR ready for that soon, but I wanted to get this seen, hopefully before the first round of consumer devices gets flashed! (If this is even the codebase y'all are using on the prod devices.)

Love the project 🧡

Teach Mode Improvements

  • Make it a computer tool, accessible with voice commands
  • Better Messaging between steps
  • Improved System Message (Access to skills)
  • Improved Point Model (click here, not there)
  • Ensure we can consistently replicate slack demo

Create an Android APK to run the client on an old android device we do not need

Is your feature request related to a problem? Please describe.
I live far away from the US, and I have an old fully working android phone that I would love to use as an alternative more bulky client for the server.

Describe the solution you'd like
A simple APK that keeps the app running and does the same thing as the 01 hardware.

Describe alternatives you've considered
I do not have time to go down the hardware rabbit hole, and don't want to wait for other batches to ship to Europe.

Additional context
My old phone is just waiting for this to make me 100x more productive. There are so many things I would like to do with my voice while working or thinking: tell it to load my dev environment for a particular project, check whether pods are running correctly on a particular kube cluster, or read my emails. My phone will stay plugged in on my desk as a full-time assistant and coworker, and what a great circular-economy kind of thing to usefully repurpose an old device.


Lastly, APKs are super easy to install, no need for app stores.

This will also open up the project to all the people in the third world who will never be able to afford to buy or build the device. Having lived in Africa, I know all too well how important that is for many young people there. This is huge IMHO!

AttributeError: module 'os' has no attribute 'uname'.

Running poetry run 01 --server --local results in the following error (on Windows):

(01) C:\Users\Martin\01\software>poetry run 01 --server --local


○

Starting...



▌ 01 is compatible with several local model providers.

[?] Which one would you like to use?:
 > Ollama
   LM Studio

5 Ollama models found. To download a new model, run ollama run <model-name>, then start a new 01 session.

For a full list of downloadable models, check out https://ollama.com/library

[?] Select a downloaded Ollama model:
   failed
   NAME
   custom_model_0
 > llama2
   mixtral


Using Ollama model: llama2

Exception in thread Thread-13 (run_until_complete):
Traceback (most recent call last):
  File "C:\Users\Martin\anaconda3\envs\01\Lib\threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "C:\Users\Martin\anaconda3\envs\01\Lib\threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Martin\anaconda3\envs\01\Lib\asyncio\base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "C:\Users\Martin\01\software\source\server\server.py", line 413, in main
    service_instance = ServiceClass(config)
                       ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Martin\01\software\source\server\services\tts\piper\tts.py", line 11, in __init__
    self.install(config["service_directory"])
  File "C:\Users\Martin\01\software\source\server\services\tts\piper\tts.py", line 37, in install
    OS = os.uname().sysname
         ^^^^^^^^
AttributeError: module 'os' has no attribute 'uname'. Did you mean: 'name'?
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
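A hedged sketch of a cross-platform alternative for that line: os.uname() only exists on Unix, while platform.system() also works on Windows (whether the rest of the Piper installer handles a "Windows" value is a separate question):

import platform

# platform.system() returns "Windows", "Darwin", or "Linux" on the common platforms,
# unlike os.uname(), which is Unix-only.
OS = platform.system()
ARCH = platform.machine()  # e.g. "AMD64", "arm64"
print(OS, ARCH)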

Slightly more convenient angles please <3 v1 body

Describe the bug
The whole setup is nice. As far as I can see, there are just some details, like the angles: if they were 45° angles, more printers could print the outer rim without support, making it more likely that more printers could handle the threading.

Expected behavior
Just that easier printing is always nice for everyone.


Desktop (please complete the following information):

  • AFK

Additional context
Everything is wonderful just nitpicking for usability reasons.

Issue launching 01 - MacOS

I'm unable to test 01 on my Mac.

After successfully running brew install portaudio ffmpeg and pip install 01OS, I ran 01 in my terminal

01
Starting server...
Server started as process 15164
Starting client...
client started as process 15165
bash: 01OS/clients/start.sh: No such file or directory
Temporarily skipping skills (OI 0.2.1, which is unreleased) so we can push to `pip`.
Starting `server.py`...
INFO:     Started server process [15164]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:8000 (Press CTRL+C to quit)
INFO:     ::1:57788 - "GET / HTTP/1.1" 404 Not Found
INFO:     ::1:57788 - "GET /favicon.ico HTTP/1.1" 404 Not Found

The terminal hangs here and only adds new log lines INFO: ::1:57792 - "GET / HTTP/1.1" 404 Not Found if I go to http://localhost:8000

I am not sure why 01OS/clients/start.sh: No such file or directory is occurring.

Ubo Open Source Hardware

Just wanted to bring the Ubo Pod OSS hardware to light in case it might be of interest. It is based on a Raspberry Pi with a polished UX. I have already made a few hundred of these but can ramp up production if there is interest. The complete source code and design files are published below:

https://github.com/ubopod
https://getubo.com
https://hackaday.io/project/190742-ubo-pod-build-apps-with-rich-ux-on-raspberry-pi

I am currently working on Raspberry Pi 5 support with NVMe, which is the way to go for running large LLMs.

Is non-NVIDIA supported? Requirements ?

Describe the bug
poetry install killed

  - Installing websockets (12.0)
  - Installing xmod (1.8.1)
zsh: killed     poetry install
~/C/01/software main ❯ poetry install   [killed after 5m 30s]
Installing dependencies from lock file

Package operations: 142 installs, 0 updates, 0 removals

  - Installing nvidia-cudnn-cu12 (8.9.2.26): Downloading... 99%
zsh: killed     poetry install

Desktop (please complete the following information):
CPU: 4x 1-core AMD Ryzen 7 7700 (-SMP-) speed: 3793 MHz
Kernel: 6.6.19-1-MANJARO x86_64 Up: 16m Mem: 3.41/3.83 GiB (89.0%)
Storage: 40 GiB (2026.8% used) Procs: 217 Shell: Zsh inxi: 3.3.33
Python 3.11.8

ALSA error

Describe the bug
ALSA lib pcm_dsnoop.c:601:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm_dmix.c:1032:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm.c:2664:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2664:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2664:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_oss.c:397:(_snd_pcm_oss_open) Cannot open device /dev/dsp
ALSA lib pcm_oss.c:397:(_snd_pcm_oss_open) Cannot open device /dev/dsp
ALSA lib confmisc.c:160:(snd_config_get_card) Invalid field card
ALSA lib pcm_usb_stream.c:482:(_snd_pcm_usb_stream_open) Invalid card 'card'
ALSA lib confmisc.c:160:(snd_config_get_card) Invalid field card
ALSA lib pcm_usb_stream.c:482:(_snd_pcm_usb_stream_open) Invalid card 'card'
ALSA lib pcm_dmix.c:1032:(snd_pcm_dmix_open) unable to open slave

Deploy the 01 server via https://redbean.dev/

Is your feature request related to a problem? Please describe.
No, I have yet to use the 01 server.

Describe the solution you'd like
However, after reading the mission statement, it seemed prudent to make sure the team was aware of redbean/cosmopolitan/llamafiles, considering the stated goal of being runnable on any device; this appears to be a relatively simple solution that provides a POSIX API for Linux + Mac + Windows + FreeBSD + NetBSD + OpenBSD. Presumably an integration would allow rapid deployment onto nearly every type of PC and embedded device. See the Actually Portable Executable blog post for more information on the compatibility philosophy: https://justine.lol/ape.html

Describe alternatives you've considered
Not many so far. It was just a thought I had. Not sponsored or anything, I just think redbean's really cool, lol.

Additional context
If you wanted to get really crazy, I also work with https://monk.io on occasion, which provides a cloud-provider-agnostic control plane for container orchestration and deployment. If we were to fit monkd and OI onto a redbean binary (not sure about the size of the engineering cost of this one, of course), then we'd have essentially an LMC node that could be deployed onto any cloud (or multiple clouds at once), on any hardware, by anyone who can manage running a binary file, and that can also download and deploy anything that has a Dockerfile. Seems good.

Happy to chip in on this, I'm much less proficient with low-level languages than scripting ones but I'd like to change that. :)

01-backed Server Fleet / Remote Orchestration Across Thousands of Hosts

Is your feature request related to a problem? Please describe.
Not a problem, but a curiosity I had while thinking about the direction of Open Interpreter and 01. 01 is clearly a consumer-targeted project, and it's really exciting to attach AI to our existing compute this way, but what if we also considered the ability to tie remote execution tools, configuration management tools, and other DevOps-related tooling into something like 01's task-based workflow execution model?

Describe the solution you'd like
I'm still working out the specifics and need to get more familiar with the code, but effectively a fan-out approach on top of what 01 is already doing, so that it could broadcast commands to multiple targets.

Additional context
It may be interesting in larger environments to have different modes of execution such as a p2p model for server execution.

Mostly opening this to solicit feedback/thinking from the community.

Cannot call "receive" once a disconnect message has been received.

(base) zhouxl@jiangliuerdeMacBook-Pro software % poetry run 01

Starting...

INFO: Started server process [60460]
INFO: Waiting for application startup.

Ready.

INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:10001 (Press CTRL+C to quit)
INFO: ('127.0.0.1', 56406) - "WebSocket /" [accepted]
INFO: connection open

Press the spacebar to start/stop recording. Press CTRL-C to exit.
Cannot call "receive" once a disconnect message has been received.
Recording started...
Recording stopped.

It seems nobody else has reported this bug; at least, I could not find the same bug in the existing issues.
Does anyone know how to solve it?

Learning new things trick

Hi great 01 team,
The most amazing thing about 01 to me is its ability to learn new skills, which seems to be different from common LLMs. Since it's an open-source project, could you share some ideas for extending the memory of a model beyond its context window?
Thanks in advance

If audio is not picked up by mic, message["content"] is NoneType and throws error

Content of error

  File "/.../01/01OS/01OS/server/server.py", line 194, in listener
    if message["content"].lower().strip(".,! ") == "stop":
       ^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'lower'

On line 172, there is a check if message == None:, but it is possible that the message is not None while message['content'] has no value. A check for this should be added.

Application shutting down on startup

Describe the bug
The poetry install was OK. I ran poetry run 01 --local; the starting message is displayed, and then it hangs and exits from PowerShell.

Error messages

(base) PS F:\katas\01\software> poetry run 01 --local

Starting...

Traceback (most recent call last):
  File "F:\Katas\01\software\start.py", line 43, in run
    _run(server=server, server_host=server_host, server_port=server_port, ...)
      locals: client=False, client_type='auto', local=True, model='gpt-4', server=False,
              server_host='0.0.0.0', server_port=10001, stt_service='openai', tts_service='openai',
              llm_service='litellm', tunnel_service='ngrok', context_window=2048, max_tokens=4096
  File "F:\Katas\01\software\start.py", line 134, in _run
    module = importlib.import_module(f".clients.{client_type}.device", package='source')
      locals: client=True, client_type='auto', local=True, server=True, server_url='0.0.0.0:10001',
              system_type='Windows', stt_service='local-whisper', tts_service='piper',
              server_thread=<Thread(Thread-11 (run_until_complete), started 25900)>
  File "D:\Users\clt\anaconda3\Lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
      locals: name='.clients.auto.device', package='source', level=1
ModuleNotFoundError: No module named 'source.clients.auto'

Exception in thread Thread-11 (run_until_complete):
Traceback (most recent call last):
  File "D:\Users\clt\anaconda3\Lib\threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "D:\Users\clt\anaconda3\Lib\threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Users\clt\anaconda3\Lib\asyncio\base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "F:\Katas\01\software\source\server\server.py", line 414, in main
    service_instance = ServiceClass(config)
                       ^^^^^^^^^^^^^^^^^^^^
  File "F:\Katas\01\software\source\server\services\tts\piper\tts.py", line 12, in __init__
    self.install(config["service_directory"])
  File "F:\Katas\01\software\source\server\services\tts\piper\tts.py", line 38, in install
    OS = os.uname().sysname
         ^^^^^^^^
AttributeError: module 'os' has no attribute 'uname'. Did you mean: 'name'?
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.

Desktop (please complete the following information):

  • OS: Windows 11
  • Python Version 3.11.18

All the best !
Pierre-Emmanuel

OSError: [Errno 86] Bad CPU type in executable:

'/Users/TDS/Library/Application Support/01/services/tts/piper/piper/piper'

Traceback (most recent call last):
  File "/Users/TDS/01/software/source/server/server.py", line 281, in listener
    await stream_tts_to_device(sentence)
  File "/Users/TDS/01/software/source/server/server.py", line 333, in stream_tts_to_device
    for chunk in stream_tts(sentence):
  File "/Users/TDS/01/software/source/server/server.py", line 338, in stream_tts
    audio_file = tts(sentence)
                 ^^^^^^^^^^^^^
  File "/Users/TDS/01/software/source/server/services/tts/piper/tts.py", line 19, in tts
    subprocess.run([
  File "/Users/TDS/.pyenv/versions/3.11.0/lib/python3.11/subprocess.py", line 546, in run
    with Popen(*popenargs, **kwargs) as process:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/TDS/.pyenv/versions/3.11.0/lib/python3.11/subprocess.py", line 1022, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/Users/TDS/.pyenv/versions/3.11.0/lib/python3.11/subprocess.py", line 1899, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)

brew install portaudio ffmpeg cmake # Install Mac OSX dependencies
poetry install # Install Python dependencies
export OPENAI_API_KEY=sk... # OR run poetry run 01 --local to run everything locally
poetry run 01 # Runs the 01 Light simulator (hold your spacebar, speak, release)

After I release the space bar, I get the answer in text, but then I get the above error instead of TTS.

Expected behavior
I expected to hear the voice speak the words that came back from the API response.


Desktop (please complete the following information):

  • OS: macOS Sonoma 14.4
  • Python Version [3.11]

Additional context
The same exact program from the same install works fine on my Mac mini M1 with 16 GB of RAM.

The Poetry configuration is invalid: - Additional properties are not allowed ('group' was unexpected)

Describe the bug
Upon running poetry install command, the following error is encountered:

  RuntimeError

  The Poetry configuration is invalid:
    - Additional properties are not allowed ('group' was unexpected)
  

  at /usr/lib/python3/dist-packages/poetry/core/factory.py:43 in create_poetry
       39│             message = ""
       40│             for error in check_result["errors"]:
       41│                 message += "  - {}\n".format(error)
       42│ 
    →  43│             raise RuntimeError("The Poetry configuration is invalid:\n" + message)
       44│ 
       45│         # Load package
       46│         name = local_config["name"]
       47│         version = local_config["version"]

To Reproduce
Steps to reproduce the behavior:

  1. Run poetry install command.
  2. Encounter the RuntimeError with the provided traceback.

Expected behavior
The poetry install command should execute without errors and install the necessary dependencies specified in the pyproject.toml file.

Screenshots
N/A

Desktop (please complete the following information):

  • OS: Ubuntu 20.04 LTS
  • Python Version: 3.10.12
  • Poetry Version: 1.1.12

Additional context
The issue seems to stem from an unexpected property 'group' in the Poetry configuration, which leads to a RuntimeError during the execution of the poetry install command. Further investigation into the pyproject.toml file might be necessary to identify and rectify the invalid configuration.

Wiring Diagram doesn't show reason for the Audio Amp? Only shows it as power source for the Dev Kit

Is your feature request related to a problem? Please describe.
When I look at the wiring diagram here:
https://github.com/OpenInterpreter/01/blob/main/hardware/light/Labeled%20Wiring%20Diagram.png

It seems to only be using the Audio Amp as a power source for the Echo Smart Speaker (only connected to GND and 5V). Surely that is not what you meant, right? It seems like a waste to just have it set up as a switch for turning the battery power on and off.

Describe the solution you'd like
If the Echo already has a mic and speaker, you don't need the amp. If you do need the amp, then output from the Echo to the amp is needed, as well as wiring to a speaker. There probably doesn't need to be both a labeled and an unlabeled wiring diagram in the repo; surely the labeled one is all that is required.

Can't install simpleaudio with pip

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for simpleaudio
Running setup.py clean for simpleaudio
Failed to build simpleaudio
ERROR: Could not build wheels for simpleaudio, which is required to install pyproject.toml-based projects

Llamafile fails loading if the llamafile_path has a space within it

Describe the bug
When you run 01 with --llm-service llamafile and the llamafile is in a directory whose path has a space within it, you'll get an error:

OSError: [Errno 8] Exec format error: '/Users/sixtenk/Library/Application Support/01/services/llm/llamafile/models/phi-2.Q4_K_M.llamafile'

The same error has been encountered in the Open Interpreter repository before

To Reproduce
I'm unsure how you can choose which directory 01 will look for llamafiles in. Disregarding that, the steps to reproduce the behavior are:

  1. Make sure that llamafiles are in a directory with a path that has a space within, e.g. .../Application Support/...
  2. Run 01 with llamafile as llm service: poetry run 01 --llm-service llamafile
  3. See error

Expected behavior
The llamafile should load as normal.

Desktop (please complete the following information):

  • OS: macOS 14.4
  • Python Version 3.11
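The error above is the classic symptom of building a shell command by string concatenation, where the unquoted space splits the path. A sketch of the usual mitigation (illustrative only, not the project's actual launch code; the path and flag below are hypothetical):

import shlex
import subprocess

# Hypothetical path containing a space, as in ".../Application Support/...".
llamafile_path = "/Users/example/Library/Application Support/01/models/model.llamafile"

# An argument list is never word-split, so the space in the path is harmless here.
subprocess.run([llamafile_path, "--help"], check=True)

# If a single shell string has to be built, the path needs explicit quoting.
subprocess.run(f"{shlex.quote(llamafile_path)} --help", shell=True, check=True)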

Local model

Running the server with --model ollama/xxx does not seem to work. It is still using GPT; in Open Interpreter it works.

`UnboundLocalError` and unable to override `interpreter.llm.model`

There are two issues within software/source/server/i.py:

1. UnboundLocalError

The os module is imported both at the top level and within the configure_interpreter function, which results in an UnboundLocalError. You can reproduce this by using os.getenv within the configure_interpreter function.

File "/Users/jcp/Development/01/software/source/server/i.py", line 193, in configure_interpreter
    interpreter.llm.model = os.getenv("MODEL", "gpt-4")
                            ^^
UnboundLocalError: cannot access local variable 'os' where it is not associated with a value
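For context, this is the general Python pattern that triggers the error; a minimal repro unrelated to the actual file contents:

import os

def configure():
    # Because of the import below, 'os' is treated as a local name for the whole
    # function body, so this line raises UnboundLocalError before the import runs.
    print(os.getenv("MODEL", "gpt-4"))
    import os

configure()  # UnboundLocalError: cannot access local variable 'os' ...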

2. interpreter.llm.model value is hardcoded

interpreter.llm.model is hardcoded to "gpt-4." From what I can tell, this makes it impossible to fully use 01 locally. When you run poetry run 01 --local, you'll get this error:

openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

I can submit a PR that fixes the above by letting users pass in the local model via --model or a LLM_MODEL environment variable.

If there's interest, I can also submit a separate PR to make 01 configurable via environment variables, command-line arguments, and a config.yaml file.
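A rough sketch of the precedence such a change could follow (CLI flag over environment variable over config.yaml over the default); all names here are illustrative, not an existing 01 interface:

import os
import yaml  # PyYAML; assumed available

def resolve_model(cli_model=None, config_path="config.yaml"):
    """Pick the model with CLI flag > environment variable > config.yaml > default."""
    if cli_model:                       # 1. explicit --model flag
        return cli_model
    if os.getenv("LLM_MODEL"):          # 2. environment variable
        return os.environ["LLM_MODEL"]
    if os.path.exists(config_path):     # 3. optional config file
        with open(config_path) as f:
            cfg = yaml.safe_load(f) or {}
        if cfg.get("model"):
            return cfg["model"]
    return "gpt-4"                      # 4. current hardcoded default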

OS support

Is your feature request related to a problem? Please describe.
The os support for the server is not expressly mentioned.

Describe the solution you'd like
The readme should include what is expected to work with the current code base.

Additional context
A roadmap, or a mention of when OS support for the server beyond macOS is planned, would be useful.

feat: Add full CLI support

I know that a lot of use cases will revolve around "Open this application, do that, etc.," but I don't get why 01OS at the moment only supports Mac and Linux.
The audio, display, and command-line complications may be a reason to support only Unix-like systems right now, but I don't see any reason why a CLI-only client and server shouldn't be possible. At the moment it's quite painful to run this in a WSL2 Ubuntu env; I guess installing and suffering through PulseAudio may be the way.

If I didn't read the documentation enough, I'm sorry if it's already possible to do that.

Update link to Piper download

Describe the bug
When running 01 locally, Piper fails to download due to a bad URL.

To Reproduce
Steps to reproduce the behavior:

  1. on macOS, run poetry run 01 --local in the /software dir without piper model downloaded
  2. See error

Expected behavior
01 is supposed to go and download piper via the defined URL:

PIPER_URL = "https://github.com/rhasspy/piper/releases/latest/download/"

However, I am not aware whether GitHub still supports these types of URLs. For instance, https://github.com/rhasspy/piper/releases/latest/ will redirect you to the page on GitHub for the latest release. Adding download to that URL redirects to a GitHub 404.
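One way to sidestep guessing the asset URL is to ask the GitHub releases API which files the latest release actually ships; a sketch (the release and asset fields are standard GitHub API fields, but how the installer would pick among them is left open):

import json
import urllib.request

# List the downloadable assets of the latest Piper release instead of guessing
# the /releases/latest/download/<asset> URL.
api_url = "https://api.github.com/repos/rhasspy/piper/releases/latest"
with urllib.request.urlopen(api_url) as resp:
    release = json.load(resp)

for asset in release["assets"]:
    print(asset["name"], asset["browser_download_url"])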


Desktop (please complete the following information):

  • macOS
  • python3.11

Possibly suggest the housing be printed out of TPU and remove the cutout for the button OR improve the button cutout.

V1 iteration request
Is your feature request related to a problem? Please describe.
I just found that, printing this with PLA, I have no idea how the button is actually pressable. This could very much be fixed if the top were printed out of TPU or the button cutout were a touch longer.

Describe the solution you'd like
I would prefer printing instructions, because I just kind of winged it with some higher-quality blue stuff I have lying around, and it felt like I shouldn't be able to actuate any buttons through the top.


Problem running 01 --local

I ran 01 --local for the first time. It went through the install process and then prompted me to press spacebar to start recording. After recording stopped, I got this error:

I am on a MacBook Pro.

ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
  built with Apple clang version 15.0.0 (clang-1500.1.0.2.5)
  configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/6.1.1_3 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags='-Wl,-ld_classic' --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopenvino --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon
  libavutil      58. 29.100 / 58. 29.100
  libavcodec     60. 31.102 / 60. 31.102
  libavformat    60. 16.100 / 60. 16.100
  libavdevice    60.  3.100 / 60.  3.100
  libavfilter     9. 12.100 /  9. 12.100
  libswscale      7.  5.100 /  7.  5.100
  libswresample   4. 12.100 /  4. 12.100
  libpostproc    57.  3.100 / 57.  3.100
[aist#0:0/pcm_s16le @ 0x135705910] Guessed Channel Layout: mono
Input #0, wav, from '/var/folders/7z/qhl_78mx1sz4yjtmfy1md90w0000gn/T/audio_20240224172112420692.wav':
  Duration: 00:00:01.56, bitrate: 705 kb/s
  Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 1 channels, s16, 705 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, wav, to '/var/folders/7z/qhl_78mx1sz4yjtmfy1md90w0000gn/T/output_stt_20240224172114115969.wav':
  Metadata:
    ISFT            : Lavf60.16.100
  Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s
    Metadata:
      encoder         : Lavc60.31.102 pcm_s16le
[out#0/wav @ 0x6000009d83c0] video:0kB audio:49kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.156677%
size=      49kB time=00:00:01.55 bitrate= 256.6kbits/s speed= 734x
Exception in thread Thread-7 (record_audio):
Traceback (most recent call last):
  File "/opt/homebrew/anaconda3/envs/01/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/opt/homebrew/anaconda3/envs/01/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/homebrew/anaconda3/envs/01/lib/python3.11/site-packages/01OS/clients/base_device.py", line 185, in record_audio
    text = stt_wav(wav_path)
           ^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/anaconda3/envs/01/lib/python3.11/site-packages/01OS/server/stt/stt.py", line 112, in stt_wav
    transcript = get_transcription_file(output_path)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/anaconda3/envs/01/lib/python3.11/site-packages/01OS/server/stt/stt.py", line 76, in get_transcription_file
    output, error = run_command([
                    ^^^^^^^^^^^^^
  File "/opt/homebrew/anaconda3/envs/01/lib/python3.11/site-packages/01OS/server/stt/stt.py", line 66, in run_command
    result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/anaconda3/envs/01/lib/python3.11/subprocess.py", line 548, in run
    with Popen(*popenargs, **kwargs) as process:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/anaconda3/envs/01/lib/python3.11/subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/opt/homebrew/anaconda3/envs/01/lib/python3.11/subprocess.py", line 1950, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/opt/homebrew/anaconda3/envs/01/lib/python3.11/site-packages/01OS/server/stt/whisper-rust/target/release/whisper-rust'

Not working on non-Ubuntu Linux

Describe the bug
Errors when running both server and client on linux.

01 --server

➜ 01 --server                


○                                                                                                                                                                            

Starting...                                                                                                                                                                  


INFO:     Started server process [247252]
INFO:     Waiting for application startup.
Task exception was never retrieved
future: <Task finished name='Task-6' coro=<put_kernel_messages_into_queue() done, defined at /home/karim/01/lib/python3.11/site-packages/01OS/server/utils/kernel.py:58> exception=FileNotFoundError(2, 'No such file or directory')>
Traceback (most recent call last):
  File "/home/karim/01/lib/python3.11/site-packages/01OS/server/utils/kernel.py", line 60, in put_kernel_messages_into_queue
    text = check_filtered_kernel()
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/karim/01/lib/python3.11/site-packages/01OS/server/utils/kernel.py", line 47, in check_filtered_kernel
    messages = get_kernel_messages()
               ^^^^^^^^^^^^^^^^^^^^^
  File "/home/karim/01/lib/python3.11/site-packages/01OS/server/utils/kernel.py", line 23, in get_kernel_messages
    with open('/var/log/dmesg', 'r') as file:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/var/log/dmesg'


Ready.                                                                                                                                                                       


INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
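The traceback above comes from get_kernel_messages() reading /var/log/dmesg, which many distributions (including Fedora) do not create. A hedged sketch of a fallback, assuming only that recent kernel text is needed (journalctl -k availability is itself distro-dependent):

import os
import subprocess

def get_kernel_messages() -> str:
    """Best-effort read of recent kernel messages across distros."""
    if os.path.exists("/var/log/dmesg"):
        with open("/var/log/dmesg") as f:
            return f.read()
    for cmd in (["journalctl", "-k", "--no-pager", "-n", "200"], ["dmesg"]):
        try:
            return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        except (FileNotFoundError, subprocess.CalledProcessError):
            continue
    return ""  # nothing available; treat as "no kernel events"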

01 --client

➜ 01 --client
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/karim/01/lib/python3.11/site-packages/start.py:84 in run                                   │
│                                                                                                  │
│   81 │   │   │   │   except FileNotFoundError:                                                   │
│   82 │   │   │   │   │   client_type = "linux"                                                   │
│   83 │   │                                                                                       │
│ ❱ 84 │   │   module = importlib.import_module(f".clients.{client_type}.device", package='01OS    │
│   85 │   │   client_thread = threading.Thread(target=module.main, args=[server_url])             │
│   86 │   │   client_thread.start()                                                               │
│   87                                                                                             │
│                                                                                                  │
│ ╭──────────────────────────────────── locals ────────────────────────────────────╮               │
│ │                 client = True                                                  │               │
│ │            client_type = 'linux'                                               │               │
│ │         context_window = 2048                                                  │               │
│ │                 expose = False                                                 │               │
│ │            handle_exit = <function run.<locals>.handle_exit at 0x7f5cfe207600> │               │
│ │            llm_service = 'litellm'                                             │               │
│ │ llm_supports_functions = False                                                 │               │
│ │    llm_supports_vision = False                                                 │               │
│ │                  local = False                                                 │               │
│ │             max_tokens = 4096                                                  │               │
│ │                  model = 'gpt-4'                                               │               │
│ │                 server = False                                                 │               │
│ │            server_host = '0.0.0.0'                                             │               │
│ │            server_port = 8000                                                  │               │
│ │             server_url = '0.0.0.0:8000'                                        │               │
│ │            stt_service = 'openai'                                              │               │
│ │            system_type = 'Linux'                                               │               │
│ │            temperature = 0.8                                                   │               │
│ │            tts_service = 'openai'                                              │               │
│ │         tunnel_service = 'bore'                                                │               │
│ ╰────────────────────────────────────────────────────────────────────────────────╯               │
│                                                                                                  │
│ /home/linuxbrew/.linuxbrew/opt/[email protected]/lib/python3.11/importlib/__init__.py:126 in           │
│ import_module                                                                                    │
│                                                                                                  │
│   123 │   │   │   if character != '.':                                                           │
│   124 │   │   │   │   break                                                                      │
│   125 │   │   │   level += 1                                                                     │
│ ❱ 126 │   return _bootstrap._gcd_import(name[level:], package, level)                            │
│   127                                                                                            │
│   128                                                                                            │
│   129 _RELOADING = {}                                                                            │
│                                                                                                  │
│ ╭────────────── locals ───────────────╮                                                          │
│ │ character = 'c'                     │                                                          │
│ │     level = 1                       │                                                          │
│ │      name = '.clients.linux.device' │                                                          │
│ │   package = '01OS'                  │                                                          │
│ ╰─────────────────────────────────────╯                                                          │
│ in _gcd_import:1204                                                                              │
│ ╭─────────────── locals ────────────────╮                                                        │
│ │   level = 1                           │                                                        │
│ │    name = '01OS.clients.linux.device' │                                                        │
│ │ package = '01OS'                      │                                                        │
│ ╰───────────────────────────────────────╯                                                        │
│ in _find_and_load:1176                                                                           │
│ ╭────────────────────── locals ──────────────────────╮                                           │
│ │ import_ = <function _gcd_import at 0x7f5cff94fd80> │                                           │
│ │  module = <object object at 0x7f5cff984050>        │                                           │
│ │    name = '01OS.clients.linux.device'              │                                           │
│ ╰────────────────────────────────────────────────────╯                                           │
│ in _find_and_load_unlocked:1126                                                                  │
│ ╭──────────────────────── locals ────────────────────────╮                                       │
│ │     import_ = <function _gcd_import at 0x7f5cff94fd80> │                                       │
│ │        name = '01OS.clients.linux.device'              │                                       │
│ │      parent = '01OS.clients.linux'                     │                                       │
│ │ parent_spec = None                                     │                                       │
│ │        path = None                                     │                                       │
│ ╰────────────────────────────────────────────────────────╯                                       │
│ in _call_with_frames_removed:241                                                                 │
│ ╭──────────────────── locals ─────────────────────╮                                              │
│ │ args = ('01OS.clients.linux',)                  │                                              │
│ │    f = <function _gcd_import at 0x7f5cff94fd80> │                                              │
│ │ kwds = {}                                       │                                              │
│ ╰─────────────────────────────────────────────────╯                                              │
│ in _gcd_import:1204                                                                              │
│ ╭──────────── locals ────────────╮                                                               │
│ │   level = 0                    │                                                               │
│ │    name = '01OS.clients.linux' │                                                               │
│ │ package = None                 │                                                               │
│ ╰────────────────────────────────╯                                                               │
│ in _find_and_load:1176                                                                           │
│ ╭────────────────────── locals ──────────────────────╮                                           │
│ │ import_ = <function _gcd_import at 0x7f5cff94fd80> │                                           │
│ │  module = <object object at 0x7f5cff984050>        │                                           │
│ │    name = '01OS.clients.linux'                     │                                           │
│ ╰────────────────────────────────────────────────────╯                                           │
│ in _find_and_load_unlocked:1140                                                                  │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │         child = 'linux'                                                                      │ │
│ │       import_ = <function _gcd_import at 0x7f5cff94fd80>                                     │ │
│ │          name = '01OS.clients.linux'                                                         │ │
│ │        parent = '01OS.clients'                                                               │ │
│ │ parent_module = <module '01OS.clients' from                                                  │ │
│ │                 '/home/karim/01/lib/python3.11/site-packages/01OS/clients/__init__.py'>      │ │
│ │   parent_spec = ModuleSpec(name='01OS.clients',                                              │ │
│ │                 loader=<_frozen_importlib_external.SourceFileLoader object at                │ │
│ │                 0x7f5cfe21d950>,                                                             │ │
│ │                 origin='/home/karim/01/lib/python3.11/site-packages/01OS/clients/__init__.p… │ │
│ │                 submodule_search_locations=['/home/karim/01/lib/python3.11/site-packages/01… │ │
│ │          path = ['/home/karim/01/lib/python3.11/site-packages/01OS/clients']                 │ │
│ │          spec = None                                                                         │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ModuleNotFoundError: No module named '01OS.clients.linux'

To Reproduce
Steps to reproduce the behavior:
On Fedora 39 latest (same one used by Linus Torvalds):

  1. Get the needed packages: sudo dnf install portaudio-devel ffmpeg cmake
  2. Create a Python 3.11 env (as the default is 3.12) and activate it: python3.11 -m venv 01 and source 01/bin/activate
  3. Prepare to install: since gcc is version 13, this is needed: sudo ln -s $(which gcc) /usr/local/bin/gcc-11, plus export CC=gcc and export CCACHE_DISABLE=1
  4. Install 01: pip install 01OS, this should complete successfully.
  5. Run the client and run the server to see the above errors.

Expected behavior
Both client and server should work when they are run.

Screenshots
(Same as the above errors)

Desktop (please complete the following information):

  • OS: Linux 6.7.9-200.fc39.x86_64 (Fedora 39 Gnome, wayland)
  • Python Version 3.11.8

Additional context
So impatient to try it!

computer.display.view() crashes 01

After 01 calls computer.display.view() it opens a screenshot of the screen, hangs, then crashes.
(Screenshot attached; ignore the blue play button, that's just Speechify.)

Desktop:

  • Hardware: M1 Macbook Air 8 GB
  • OS: macOS Sonoma 14.3.1
  • Python Version 3.11.7

Console Output:

poetry run 01
The currently activated Python version 3.12.2 is not supported by the project (>=3.9,<3.12).
Trying to find and use a compatible version.
Using python3.11 (3.11.7)

Starting...

INFO: Started server process [1839]
INFO: Waiting for application startup.

Ready.

INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:10001 (Press CTRL+C to quit)
INFO: ('127.0.0.1', 63207) - "WebSocket /" [accepted]
INFO: connection open

Press the spacebar to start/stop recording. Press CTRL-C to exit.
Recording started...
Recording stopped.
audio/wav /var/folders/f5/sz18sylj3fs3kc_76_57ttjr0000gn/T/input_20240322002322198717.wav /var/folders/f5/sz18sylj3fs3kc_76_57ttjr0000gn/T/output_20240322002322200512.wav

View my computer display.

Alright, let's have a look at your display.

computer.display.view()

[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.

    Python Version: 3.11.7
    Pip Version: 24.0
    Open-interpreter Version: cmd: Open Interpreter 0.2.3 New Computer Update, pkg: 0.2.3
OS Version and Architecture: macOS-14.3.1-arm64-arm-64bit
CPU Info: arm
RAM Info: 8.00 GB, used: 3.35, free: 0.12

    # Interpreter Info

    Vision: True
    Model: gpt-4-vision-preview
    Function calling: False
    Context window: 110000
    Max tokens: 4096

    Auto run: True
    API base: None
    Offline: False

    Curl output: Not local

    # Messages

    System Message: You are the 01, a screenless executive assistant that can complete any task.

When you execute code, it will be executed on the user's machine. The user has given you full and complete permission to execute any code necessary to complete the task.
Run any code to achieve the goal, and if at first you don't succeed, try again and again.
You can install new packages.
Be concise. Your messages are being read aloud to the user. DO NOT MAKE PLANS. RUN CODE QUICKLY.
Try to spread complex tasks over multiple code blocks. Don't try to do complex tasks in one go.
Manually summarize text.

DON'T TELL THE USER THE METHOD YOU'LL USE, OR MAKE PLANS. ACT LIKE THIS:


user: Are there any concerts in Seattle?
assistant: Let me check on that.

computer.browser.search("concerts in Seattle")
Upcoming concerts: Bad Bunny at Neumos...

It looks like there's a Bad Bunny concert at Neumos...

Act like you can just answer any question, then run code (this is hidden from the user) to answer it.
THE USER CANNOT SEE CODE BLOCKS.
Your responses should be very short, no more than 1-2 sentences long.
DO NOT USE MARKDOWN. ONLY WRITE PLAIN TEXT.

TASKS

Help the user manage their tasks.
Store the user's tasks in a Python list called tasks.
The user's current task list (it might be empty) is: {{ tasks }}
When the user completes the current task, you should remove it from the list and read the next item by running tasks = tasks[1:]\ntasks[0]. Then, tell the user what the next task is.
When the user tells you about a set of tasks, you should intelligently order tasks, batch similar tasks, and break down large tasks into smaller tasks (for this, you should consult the user and get their
permission to break it down). Your goal is to manage the task list as intelligently as possible, to make the user as efficient and non-overwhelmed as possible. They will require a lot of encouragement,
support, and kindness. Don't say too much about what's ahead of them— just try to focus them on each step at a time.

After starting a task, you should check in with the user around the estimated completion time to see if the task is completed.
To do this, schedule a reminder based on estimated completion time using the function schedule(message="Your message here.", start="8am"), WHICH HAS ALREADY BEEN IMPORTED. YOU DON'T NEED TO IMPORT THE
schedule FUNCTION. IT IS AVAILABLE. You'll receive the message at the time you scheduled it. If the user says to monitor something, simply schedule it with an interval of a duration that makes sense for
the problem by specifying an interval, like this: schedule(message="Your message here.", interval="5m")

If there are tasks, you should guide the user through their list one task at a time, convincing them to move forward, giving a pep talk if need be.

THE COMPUTER API

The computer module is ALREADY IMPORTED, and can be used for some tasks:

result_string = computer.browser.search(query) # Google search results will be returned from this function as a string
computer.calendar.create_event(title="Meeting", start_date=datetime.datetime.now(), end=datetime.datetime.now() + datetime.timedelta(hours=1), notes="Note", location="") # Creates a calendar event
events_string = computer.calendar.get_events(start_date=datetime.date.today(), end_date=None) # Get events between dates. If end_date is None, only gets events for start_date
computer.calendar.delete_event(event_title="Meeting", start_date=datetime.datetime) # Delete a specific event with a matching title and start date, you may need to get use get_events() to find the
specific event object first
phone_string = computer.contacts.get_phone_number("John Doe")
contact_string = computer.contacts.get_email_address("John Doe")
computer.mail.send("[email protected]", "Meeting Reminder", "Reminder that our meeting is at 3pm today.", ["path/to/attachment.pdf", "path/to/attachment2.pdf"]) # Send an email with optional attachments
emails_string = computer.mail.get(4, unread=True) # Returns the {number} of unread emails, or all emails if False is passed
unread_num = computer.mail.unread_count() # Returns the number of unread emails
computer.sms.send("555-123-4567", "Hello from the computer!") # Send a text message. MUST be a phone number, so use computer.contacts.get_phone_number frequently here

Do not import the computer module, or any of its sub-modules. They are already imported.

DO NOT use the computer module for ALL tasks. Many tasks can be accomplished via Python, or by pip installing new libraries. Be creative!

GUI CONTROL (RARE)

You are a computer controlling language model. You can control the user's GUI.
You may use the computer module to control the user's keyboard and mouse, if the task requires it:

computer.display.view() # Shows you what's on the screen, returns a `pil_image` in case you need it (rarely). **You almost always want to do this first!**
computer.keyboard.hotkey(" ", "command") # Opens spotlight
computer.keyboard.write("hello")
computer.mouse.click("text onscreen") # This clicks on the UI element with that text. Use this **frequently** and get creative! To click a video, you could pass the *timestamp* (which is usually written
on the thumbnail) into this.
computer.mouse.move("open recent >") # This moves the mouse over the UI element with that text. Many dropdowns will disappear if you click them. You have to hover over items to reveal more.
computer.mouse.click(x=500, y=500) # Use this very, very rarely. It's highly inaccurate
computer.mouse.click(icon="gear icon") # Moves mouse to the icon with that description. Use this very often
computer.mouse.scroll(-10) # Scrolls down. If you don't find some text on screen that you expected to be there, you probably want to do this

You are an image-based AI, you can see images.
Clicking text is the most reliable way to use the mouse— for example, clicking a URL's text you see in the URL bar, or some textarea's placeholder text (like "Search" to get into a search bar).
If you use plt.show(), the resulting image will be sent to you. However, if you use PIL.Image.show(), the resulting image will NOT be sent to you.
It is very important to make sure you are focused on the right application and window. Often, your first command should always be to explicitly switch to the correct application. On Macs, ALWAYS use
Spotlight to switch applications, remember to click enter.
When searching the web, use query parameters. For example, https://www.amazon.com/s?k=monitor

SKILLS

Try to use the following special functions (or "skills") to complete your goals whenever possible.
THESE ARE ALREADY IMPORTED. YOU CAN CALL THEM INSTANTLY.


{{
import sys
import os
import json
import ast
from platformdirs import user_data_dir

directory = os.path.join(user_data_dir('01'), 'skills')
if not os.path.exists(directory):
    os.mkdir(directory)

def get_function_info(file_path):
    with open(file_path, "r") as file:
        tree = ast.parse(file.read())
    functions = [node for node in tree.body if isinstance(node, ast.FunctionDef)]
    for function in functions:
        docstring = ast.get_docstring(function)
        args = [arg.arg for arg in function.args.args]
        print(f"Function Name: {function.name}")
        print(f"Arguments: {args}")
        print(f"Docstring: {docstring}")
        print("---")

files = os.listdir(directory)
for file in files:
    if file.endswith(".py"):
        file_path = os.path.join(directory, file)
        get_function_info(file_path)
}}

YOU can add to the above list of skills by defining a python function. The function will be saved as a skill.
Search all existing skills by running computer.skills.search(query).

Teach Mode

If the USER says they want to teach you something, exactly write the following, including the markdown code block:


One moment.

computer.skills.new_skill.create()

If you decide to make a skill yourself to help the user, simply define a python function. computer.skills.new_skill.create() is for user-described skills.

USE COMMENTS TO PLAN

IF YOU NEED TO THINK ABOUT A PROBLEM: (such as "Here's the plan:"), WRITE IT IN THE COMMENTS of the code block!


User: What is 432/7?
Assistant: Let me think about that.

# Here's the plan:
# 1. Divide the numbers
# 2. Round to 3 digits
print(round(432/7, 3))
61.714

The answer is 61.714.

MANUAL TASKS

Translate things to other languages INSTANTLY and MANUALLY. Don't ever try to use a translation tool.
Summarize things manually. DO NOT use a summarizer tool.

CRITICAL NOTES

Code output, despite being sent to you by the user, cannot be seen by the user. You NEED to tell the user about the output of some code, even if it's exact. >>The user does not have a screen.<<
ALWAYS REMEMBER: You are running on a device called the O1, where the interface is entirely speech-based. Make your responses to the user VERY short. DO NOT PLAN. BE CONCISE. WRITE CODE TO RUN IT.
Try multiple methods before saying the task is impossible. You can do it!

    {'role': 'user', 'type': 'message', 'content': 'View my computer display.\n'}

{'role': 'assistant', 'type': 'message', 'content': "Alright, let's have a look at your display.\n"}

{'role': 'assistant', 'type': 'code', 'format': 'python', 'content': '\ncomputer.display.view()\n'}

{'role': 'computer', 'type': 'console', 'format': 'output', 'content': ''}

{'role': 'computer', 'type': 'image', 'format': 'base64.png', 'content':
'iVBORw0KGgoAAAANSUhEUgAADSAAAAg0CAIAAACcJK5OAAAMP2lDQ1BJQ0MgUHJvZmlsZQAAeJyVVwdYU8kWnluSkJDQAghICb0JIlICSAmhBZDebYQkQCgxBoKKHVlUcC2oWMCGrooodpodsbMo9r5YUFDWxYJdeZMCuu4r35vvmzv//efMf86cO3PvHQDUT3DF4hxUA4B
cUb4kJtifkZScwiB1AwwQABV4ACaXlydmRUWFA1gG27+XdzcAImuvOsi0/tn/X4smX5DHAwCJgjiNn8fLhfggAHgVTyzJB4Ao...z2QyztWpkHt/B1IqDl+/emXirxew4BmduWmuI1hiIwLaPlw8KCNzi+3xaV6wxyEGYqG80Ce7qkiqF05GdmRnwuKMqOsDBoyCeQFNqOdr
iPMadGzCoSA7bQxKjfqyukmmVIUagEK7M0nYXVPCsIG5rbmGPEJeYrrkAM/+ch+3W8a/cIWIiIEs81GyET3+MYZwUkfi4x912ov5uukQcHwdxWrA0wkuAutCS4BNFiY2HGCwS+hPYhJq7E9g8mygv1dySNCih76o/f98/hdZPnemudygzQAAAABJRU5ErkJggg=='}

{'role': 'computer', 'type': 'console', 'format': 'output', 'content': "Displayed on the user's machine."}

Traceback (most recent call last):
File "/Users/mac/Documents/GitHub/01/software/source/server/server.py", line 256, in listener
for chunk in interpreter.chat(messages, stream=True, display=True):
File "/Users/mac/Library/Caches/pypoetry/virtualenvs/01os-qZIXqCtQ-py3.11/lib/python3.11/site-packages/interpreter/core/core.py", line 196, in _streaming_chat
yield from terminal_interface(self, message)
File "/Users/mac/Library/Caches/pypoetry/virtualenvs/01os-qZIXqCtQ-py3.11/lib/python3.11/site-packages/interpreter/terminal_interface/terminal_interface.py", line 136, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "/Users/mac/Library/Caches/pypoetry/virtualenvs/01os-qZIXqCtQ-py3.11/lib/python3.11/site-packages/interpreter/core/core.py", line 235, in _streaming_chat
yield from self._respond_and_store()
File "/Users/mac/Library/Caches/pypoetry/virtualenvs/01os-qZIXqCtQ-py3.11/lib/python3.11/site-packages/interpreter/core/core.py", line 281, in _respond_and_store
for chunk in respond(self):
File "/Users/mac/Library/Caches/pypoetry/virtualenvs/01os-qZIXqCtQ-py3.11/lib/python3.11/site-packages/interpreter/core/respond.py", line 69, in respond
for chunk in interpreter.llm.run(messages_for_llm):
File "/Users/mac/Library/Caches/pypoetry/virtualenvs/01os-qZIXqCtQ-py3.11/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 97, in run
messages = convert_to_openai_messages(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/Library/Caches/pypoetry/virtualenvs/01os-qZIXqCtQ-py3.11/lib/python3.11/site-packages/interpreter/core/llm/utils/convert_to_openai_messages.py", line 173, in convert_to_openai_messages
new_message["content"] = new_message["content"].strip()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'strip'
^Czsh: killed poetry run 01
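
For context on the traceback above: convert_to_openai_messages calls .strip() on each message's content, which only works when that content is a string. A hedged sketch (with made-up messages) of the failure mode and the type check that would avoid it:

# Illustrative messages only: text content arrives as a str, while image or
# other rich content can arrive as a list, which has no .strip() method.
text_message = {"role": "assistant", "type": "message", "content": "Alright, let's have a look at your display.\n"}
image_message = {"role": "computer", "type": "image", "format": "base64.png", "content": ["<base64 chunk>"]}

for message in (text_message, image_message):
    content = message["content"]
    if isinstance(content, str):
        print(content.strip())  # fine for plain text
    else:
        # calling content.strip() here would raise AttributeError, as in the traceback
        print(f"non-string content of type {type(content).__name__}")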

Portaudio Import - M2 Sonoma not working

I've tried nearly everything to get PortAudio working; this is the error I get:

(01os-py3.9) (base) spence@Peri-XL software % poetry run 01
Warning: '01' is an entry point defined in pyproject.toml, but it's not installed as a script. You may get improper `sys.argv[0]`.

The support to run uninstalled scripts will be removed in a future release.

Run poetry install to resolve and get rid of this message.

Starting...

Could not import the PyAudio C module 'pyaudio._portaudio'.
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/spence/Library/CloudStorage/Dropbox/Github/01/software/start.py:44 in run │
│ │
│ 41 │ │ │ local: bool = typer.Option(False, "--local", help="Use recommended local ser │
│ 42 │ │ ): │
│ 43 │ │
│ ❱ 44 │ _run( │
│ 45 │ │ server=server, │
│ 46 │ │ server_host=server_host, │
│ 47 │ │ server_port=server_port, │
│ │
│ ╭────────────── locals ──────────────╮ │
│ │ client = False │ │
│ │ client_type = 'auto' │ │
│ │ context_window = 2048 │ │
│ │ expose = False │ │
│ │ llm_service = 'litellm' │ │
│ │ llm_supports_functions = False │ │
│ │ llm_supports_vision = False │ │
│ │ local = False │ │
│ │ max_tokens = 4096 │ │
│ │ model = 'gpt-4' │ │
│ │ server = False │ │
│ │ server_host = '0.0.0.0' │ │
│ │ server_port = 10001 │ │
│ │ server_url = None │ │
│ │ stt_service = 'openai' │ │
│ │ temperature = 0.8 │ │
│ │ tts_service = 'openai' │ │
│ │ tunnel_service = 'ngrok' │ │
│ ╰────────────────────────────────────╯ │
│ │
│ /Users/spence/Library/CloudStorage/Dropbox/Github/01/software/start.py:136 in _run │
│ │
│ 133 │ │ │ │ except FileNotFoundError: │
│ 134 │ │ │ │ │ client_type = "linux" │
│ 135 │ │ │
│ ❱ 136 │ │ module = importlib.import_module(f".clients.{client_type}.device", package='sour │
│ 137 │ │ client_thread = threading.Thread(target=module.main, args=[server_url]) │
│ 138 │ │ client_thread.start() │
│ 139 │
│ │
│ ╭──────────────────────────────────────── locals ─────────────────────────────────────────╮ │
│ │ client = True │ │
│ │ client_type = 'mac' │ │
│ │ context_window = 2048 │ │
│ │ expose = False │ │
│ │ handle_exit = <function _run.<locals>.handle_exit at 0x7f87a120c550> │ │
│ │ llm_service = 'litellm' │ │
│ │ llm_supports_functions = False │ │
│ │ llm_supports_vision = False │ │
│ │ local = False │ │
│ │ loop = <_UnixSelectorEventLoop running=True closed=False debug=False> │ │
│ │ max_tokens = 4096 │ │
│ │ model = 'gpt-4' │ │
│ │ server = True │ │
│ │ server_host = '0.0.0.0' │ │
│ │ server_port = 10001 │ │
│ │ server_thread = <Thread(Thread-9, started 13056811008)> │ │
│ │ server_url = '0.0.0.0:10001' │ │
│ │ stt_service = 'openai' │ │
│ │ system_type = 'Darwin' │ │
│ │ temperature = 0.8 │ │
│ │ tts_service = 'openai' │ │
│ │ tunnel_service = 'ngrok' │ │
│ ╰─────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /Users/spence/opt/anaconda3/lib/python3.9/importlib/__init__.py:127 in import_module │
│ │
│ 124 │ │ │ if character != '.': │
│ 125 │ │ │ │ break │
│ 126 │ │ │ level += 1 │
│ ❱ 127 │ return _bootstrap._gcd_import(name[level:], package, level) │
│ 128 │
│ 129 │
│ 130 _RELOADING = {} │
│ │
│ ╭───────────── locals ──────────────╮ │
│ │ character = 'c' │ │
│ │ level = 1 │ │
│ │ name = '.clients.mac.device' │ │
│ │ package = 'source' │ │
│ ╰───────────────────────────────────╯ │
│ in _gcd_import:1030 │
│ ╭─────────────── locals ────────────────╮ │
│ │ level = 1 │ │
│ │ name = 'source.clients.mac.device' │ │
│ │ package = 'source' │ │
│ ╰───────────────────────────────────────╯ │
│ in find_and_load:1007 │
│ ╭────────────────────── locals ──────────────────────╮ │
│ │ import_ = <function _gcd_import at 0x7f87d0098310> │ │
│ │ module = <object object at 0x7f87d0070060> │ │
│ │ name = 'source.clients.mac.device' │ │
│ ╰────────────────────────────────────────────────────╯ │
│ in find_and_load_unlocked:986 │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ import_ = <function _gcd_import at 0x7f87d0098310> │ │
│ │ name = 'source.clients.mac.device' │ │
│ │ parent = 'source.clients.mac' │ │
│ │ parent_module = <module 'source.clients.mac' from │ │
│ │ '/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clien… │ │
│ │ path = [ │ │
│ │ │ │ │
│ │ '/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clien… │ │
│ │ ] │ │
│ │ spec = ModuleSpec(name='source.clients.mac.device', │ │
│ │ loader=<_frozen_importlib_external.SourceFileLoader object at │ │
│ │ 0x7f8770920880>, │ │
│ │ origin='/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/sourc… │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ in _load_unlocked:680 │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ module = <module 'source.clients.mac.device' from │ │
│ │ '/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clients/mac/… │ │
│ │ spec = ModuleSpec(name='source.clients.mac.device', │ │
│ │ loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f8770920880>, │ │
│ │ origin='/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clien… │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ in exec_module:850 │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ code = <code object at 0x7f87709287c0, file │ │
│ │ "/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clients/mac/… │ │
│ │ line 1> │ │
│ │ module = <module 'source.clients.mac.device' from │ │
│ │ '/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clients/mac/… │ │
│ │ self = <_frozen_importlib_external.SourceFileLoader object at 0x7f8770920880> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ in _call_with_frames_removed:228 │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ args = ( │ │
│ │ │ <code object at 0x7f87709287c0, file │ │
│ │ "/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clients/mac/de… │ │
│ │ line 1>, │ │
│ │ │ { │ │
│ │ │ │ 'name': 'source.clients.mac.device', │ │
│ │ │ │ 'doc': None, │ │
│ │ │ │ 'package': 'source.clients.mac', │ │
│ │ │ │ 'loader': <_frozen_importlib_external.SourceFileLoader object at │ │
│ │ 0x7f8770920880>, │ │
│ │ │ │ 'spec': ModuleSpec(name='source.clients.mac.device', │ │
│ │ loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f8770920880>, │ │
│ │ origin='/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clients… │ │
│ │ │ │ 'file': │ │
│ │ '/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clients/mac'+1… │ │
│ │ │ │ 'cached': │ │
│ │ '/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clients/mac'+3… │ │
│ │ │ │ 'builtins': { │ │
│ │ │ │ │ 'name': 'builtins', │ │
│ │ │ │ │ 'doc': 'Built-in functions, exceptions, and other │ │
│ │ objects.\n\nNoteworthy: None is the `nil'+46, │ │
│ │ │ │ │ 'package': '', │ │
│ │ │ │ │ 'loader': <class '_frozen_importlib.BuiltinImporter'>, │ │
│ │ │ │ │ 'spec': ModuleSpec(name='builtins', loader=<class │ │
│ │ '_frozen_importlib.BuiltinImporter'>, origin='built-in'), │ │
│ │ │ │ │ 'build_class': , │ │
│ │ │ │ │ 'import': , │ │
│ │ │ │ │ 'abs': , │ │
│ │ │ │ │ 'all': , │ │
│ │ │ │ │ 'any': , │ │
│ │ │ │ │ ... +142 │ │
│ │ │ │ } │ │
│ │ │ } │ │
│ │ ) │ │
│ │ f = │ │
│ │ kwds = {} │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clients/mac/device.py:1 in │
│ │
│ │
│ ❱ 1 from ..base_device import Device │
│ 2 │
│ 3 device = Device() │
│ 4 │
│ │
│ /Users/spence/Library/CloudStorage/Dropbox/Github/01/software/source/clients/base_device.py:8 in │
│ │
│ │
│ 5 import asyncio │
│ 6 import threading │
│ 7 import os │
│ ❱ 8 import pyaudio │
│ 9 from starlette.websockets import WebSocket │
│ 10 from queue import Queue │
│ 11 from pynput import keyboard │
│ │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ asyncio = <module 'asyncio' from │ │
│ │ '/Users/spence/opt/anaconda3/lib/python3.9/asyncio/__init__.py'> │ │
│ │ load_dotenv = <function load_dotenv at 0x7f87a05a6310> │ │
│ │ os = <module 'os' from '/Users/spence/opt/anaconda3/lib/python3.9/os.py'> │ │
│ │ threading = <module 'threading' from │ │
│ │ '/Users/spence/opt/anaconda3/lib/python3.9/threading.py'> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /Users/spence/Library/CloudStorage/Dropbox/Github/01/software/.venv/lib/python3.9/site-packages/ │
│ pyaudio/__init__.py:111 in │
│ │
│ 108 import warnings │
│ 109 │
│ 110 try: │
│ ❱ 111 │ import pyaudio._portaudio as pa │
│ 112 except ImportError: │
│ 113 │ print("Could not import the PyAudio C module 'pyaudio._portaudio'.") │
│ 114 │ raise │
│ │
│ ╭────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ locale = <module 'locale' from '/Users/spence/opt/anaconda3/lib/python3.9/locale.py'> │ │
│ │ warnings = <module 'warnings' from '/Users/spence/opt/anaconda3/lib/python3.9/warnings.py'> │ │
│ ╰─────────────────────────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError:
dlopen(/Users/spence/Library/CloudStorage/Dropbox/Github/01/software/.venv/lib/python3.9/site-packages/pyaudio/_portaudio.cpython-39-darwin.so,
0x0002): symbol not found in flat namespace '_PaMacCore_SetupChannelMap'
INFO: Started server process [51556]
INFO: Waiting for application startup.

Ready.

INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:10001 (Press CTRL+C to quit)

I've tried:

https://stackoverflow.com/questions/68251169/unable-to-install-pyaudio-on-m1-mac-portaudio-already-installed/68296168#68296168

https://discussions.apple.com/thread/252638887?sortBy=best

https://www.deskriders.dev/posts/1671901033-installing-pyaudio-mac/

Nothing seems to work, even trying in a separate env.

This happens when running poetry run 01.
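
For what it's worth, a missing '_PaMacCore_SetupChannelMap' symbol on Apple Silicon usually means the PortAudio library and the Python interpreter were built for different architectures (arm64 vs x86_64, the latter being common with older Anaconda installs running under Rosetta). A quick hedged check of what the running interpreter actually is:

import platform
import sys

print(platform.machine())         # 'arm64' for a native Apple Silicon build, 'x86_64' under Rosetta
print(platform.python_version())  # here reported as 3.9 from the Anaconda environment
print(sys.prefix)                 # shows which environment is actually running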

add monitor

I think we should have a monitor: a 2.1" LCD with an emotive face.

Web Agents that give O1 Browser Control

Is your feature request related to a problem? Please describe.
It would be amazing if O1 could control the browser. I use email and almost everything else as a web app, so browser control would open up a whole new paradigm of things O1 can do.

Describe the solution you'd like
An implementation of a web agent (a rough sketch follows below).

Additional context
None
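
Purely as a hypothetical illustration of one building block such a web agent could wrap (none of this exists in 01 today), a rough page-reading step in plain Python:

import urllib.request
from html.parser import HTMLParser

class PageText(HTMLParser):
    """Collects the visible text nodes of an HTML page (very rough)."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

def read_page(url):
    """Fetch a URL and return its visible text. Hypothetical helper, not an existing 01 API."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = PageText()
    parser.feed(html)
    return "\n".join(parser.chunks)

# Example: print(read_page("https://example.com"))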

consistent Cross-Platform App/GUI

Is your feature request related to a problem? Please describe.
I've seen many people, here and elsewhere, ask for frontends/apps for specific devices and OSes. In your roadmap you also mention a separate app, which I think is problematic and not scalable.

Describe the solution you'd like
A cross-platform framework like Google's Flutter would be a very good solution, in my opinion. I have first-hand experience with it; it's very easy to use and the results are great.

Describe alternatives you've considered
There are also similar alternatives like Kivy, but I think Flutter is the best option at the time of writing.

Additional context
flutter.dev
tutorial
This one is also pretty cool for experimenting and learning about the available widgets: https://flutterflow.io/

Slightly thicker walls: the bottom was pretty easy to inadvertently squish and break.

Is your feature request related to a problem? Please describe.
I accidentally pressed a hole in the bottom of the V1 while handling my print. It seems this could be remedied if the dome were made perhaps 0.3 mm thicker.

Describe the solution you'd like
I apparently have powerlifter hands and wrecked my print, and the durability wasn't super confidence-inspiring. If it were mass-produced, it might be prone to breaking fairly easily.

Additional context
I don't have my phone nearby for a photo, but the print, being a series of circular layers, doesn't benefit much from variation in layer direction for strength, so it's pretty easy to break.
