
spark's Introduction

Spark

Spark is an Ansible playbook meant to provision a personal machine running Arch Linux. It is intended to run locally on a fresh Arch install (i.e., taking the place of any post-installation setup), but due to Ansible's idempotent nature it may also be run on top of an already configured machine.

Spark assumes it will be run on a laptop and performs some configuration based on this assumption. This behaviour may be changed by removing the laptop role from the playbook or by skipping the laptop tag.

If Spark is run on either a ThinkPad or a MacBook, it will detect this and execute platform-specific tasks.

Note: If you would like to try recreating all the tasks currently included in the Ansible playbook inside a VM, you will need a disk of at least 16 GB.

Running

First, refresh the package databases and install Ansible and python-passlib:

$ pacman -Syy python-passlib ansible

Second, install and update the submodules:

$ git submodule init && git submodule update

Finally, run the playbook as root:

# ansible-playbook -i localhost playbook.yml

When run, Ansible will prompt for the user password. This only needs to be provided on the first run when the user is being created. On later runs, providing any password -- whether the current user password or a new one -- will have no effect.

SSH

By default, Ansible will attempt to install the private SSH key for the user. The key should be available at the path specified in the ssh.user_key variable. Removing this variable will cause the key installation task to be skipped.

SSHD

If ssh.enable_sshd is set to True, the sshd systemd socket will be enabled. By default, sshd is configured but not enabled.
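
For example, the SSH-related variables described above might be defined like this (a hedged sketch; the exact file, e.g. group_vars/all, and the nesting are assumptions based on the variable names, and the key path is hypothetical):

ssh:
  user_key: path/to/id_rsa    # private key to install for the user; remove to skip
  enable_sshd: True           # enable the sshd systemd socket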

Dotfiles

Ansible expects that the user wishes to clone dotfiles from the git repository specified in the dotfiles.url variable and install them with rcm. The destination to clone the repository into is defined by the dotfiles.destination variable, which is relative to the user's home directory.

These tasks will be skipped if the dotfiles variable is not defined.
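
For example (a hedged sketch; the repository URL and destination are hypothetical):

dotfiles:
  url: https://github.com/example/dotfiles.git
  destination: .dotfiles    # cloned relative to the user's home directory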

Tagging

All tasks are tagged with their role, allowing them to be skipped by tag in addition to modifying playbook.yml.

AUR

All tasks involving the AUR are tagged aur. To provision an AUR-free system, pass this tag to Ansible's --skip-tags option.
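
For example, to run the playbook while skipping all AUR tasks (same invocation as in the Running section above):

# ansible-playbook -i localhost playbook.yml --skip-tags aur

The laptop tag mentioned earlier can be skipped in the same way.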

AUR packages are installed via the ansible-aur module. Note that while yay, an AUR helper, is installed by default, it will not be used during any of the provisioning.

Firejail

Many applications are sandboxed with Firejail. This behavior should be largely invisible to the user.

Custom security profiles are provided for certain applications. These are installed to /usr/local/etc/firejail. Firejail does not look in this directory by default. To use the security profiles, they must either be specified on the command-line or included in an appropriately named profile located in ~/.config/firejail.

# Example 1:
# Launch Firefox using the custom profile by specifying the full path of the profile.
$ firejail --profile=/usr/local/etc/firejail/firefox.profile /usr/bin/firefox
# Example 2:
# Launch Firefox using the custom profile by specifying its directory.
$ firejail --profile-path=/usr/local/etc/firejail /usr/bin/firefox
# Example 3:
# Include the profile in ~/.config/firejail
$ mkdir -p ~/.config/firejail
$ echo 'include /usr/local/etc/firejail/firefox.profile' > ~/.config/firejail/firefox.profile
$ firejail /usr/bin/firefox

The script profile-activate is provided to automatically include the profiles when appropriate. For every profile located in /usr/local/etc/firejail, the script looks for a profile with the same name in ~/.config/firejail. If one is not found, it will create a profile that simply includes the system profile, as in the third example above. It will not modify any existing user profiles.
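
A minimal sketch of the logic described above (an illustration only, not the actual profile-activate script):

$ mkdir -p ~/.config/firejail
$ for p in /usr/local/etc/firejail/*.profile; do u=~/.config/firejail/${p##*/}; [ -e "$u" ] || echo "include $p" > "$u"; done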

Blacklisting

The firejail.blacklist variable is used to populate /etc/firejail/globals.local with a list of blacklisted files and directories. This file is included by all security profiles, causing the specified locations to be inaccessible to jailed programs.
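
For example (a hedged sketch; the nesting is an assumption and the listed paths are only illustrative):

firejail:
  blacklist:
    - ~/.ssh
    - ~/.gnupg
    - ~/documents/private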

MAC Spoofing

By default, the MAC address of all network interfaces is spoofed at boot, before any network services are brought up. This is done with macchiato, which uses legitimate OUI prefixes to make the spoofing less recognizable.

MAC spoofing is desirable for greater privacy on public networks, but may be inconvenient on home or corporate networks where a consistent (if not real) MAC address is wanted for authentication. To work around this, allow macchiato to randomize the MAC on boot, but tell NetworkManager to clone the real (or a fake but consistent) MAC address in its profile for the trusted networks. This can be done in the GUI by populating the "Cloned MAC address" field for the appropriate profiles, or by setting the cloned-mac-address property in the profile file at /etc/NetworkManager/system-connections/.
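
For example, to pin a consistent (fake) MAC address on a trusted Wi-Fi profile from the command line (the connection name and address here are only placeholders):

$ nmcli connection modify "HomeWifi" 802-11-wireless.cloned-mac-address 02:11:22:33:44:55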

Spoofing may be disabled entirely by setting the network.spoof_mac variable to False.

Trusted Networks

The trusted network framework provided by nmtrust is leveraged to start certain systemd units when connected to trusted networks, and stop them elsewhere.

This helps to avoid leaking personal information on untrusted networks by ensuring that certain network tasks are not running in the background. Currently, this is used for mail syncing (see the section below on Syncing and Scheduling Mail), Tarsnap backups (see the section below on Scheduling Tarsnap), BitlBee (see the section below on BitlBee), and git-annex (see the section below on git-annex).

Trusted networks are defined using their NetworkManager UUIDs, configured in the network.trusted_uuid list. NetworkManager UUIDs may be discovered using nmcli con.
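
For example (a hedged sketch; the UUIDs are placeholders of the kind reported by nmcli con):

network:
  trusted_uuid:
    - 5cd46a00-1111-2222-3333-444455556666    # home
    - 7f2b3c00-aaaa-bbbb-cccc-ddddeeeeffff    # office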

Mail

Receiving Mail

Receiving mail is supported by syncing from IMAP servers with either isync or OfflineIMAP. isync is used by default; this can be changed to OfflineIMAP by setting the mail.sync_tool variable to offlineimap.

Sending Mail

msmtp is used to send mail. Included as part of msmtp's documentation are a set of msmtpq scripts for queuing mail. These scripts are copied to the user's path for use. When calling msmtpq instead of msmtp, mail is sent normally if internet connectivity is available. If the user is offline, the mail is saved in a queue, to be sent out when internet connectivity is again available. This helps support a seamless workflow, both offline and online.

System Mail

If the email.user variable is defined, the system will be configured to forward mail for the user and root to this address. Removing this variable will cause no mail aliases to be put in place.

The cron implementation is configured to send mail using msmtp.

Syncing and Scheduling Mail

A shell script called mailsync is included to sync mail, by first sending any mail in the msmtp queue and then syncing with the chosen IMAP servers via either isync or OfflineIMAP. The script will also attempt to sync contacts and calendars via vdirsyncer. To disable this behavior, set the mail.sync_pim variable to False.

Before syncing, the mailsync script checks for internet connectivity using NetworkManager. mailsync may be called directly by the user, e.g. by configuring a hotkey in Mutt.

A systemd timer is also included to periodically call mailsync. The timer is set to sync every 5 minutes (configurable through the mail.sync_time variable).

The timer is not started or enabled by default. Instead, the timer is added to /etc/nmtrust/trusted_units, causing the NetworkManager trusted unit dispatcher to activate the timer whenever a connection is established to a trusted network. The timer is stopped whenever the network goes down or a connection is established to an untrusted network.

To have the timer activated at boot, change the mail.sync_on variable from trusted to all.

If the mail.sync_on variable is set to anything other than trusted or all, the timer will never be activated.
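
Putting the mail-related variables above together, a configuration might look like this (a hedged sketch; the exact nesting and value formats, particularly for sync_time, are assumptions):

mail:
  sync_tool: isync      # or offlineimap
  sync_pim: True        # also sync contacts and calendars via vdirsyncer
  sync_time: 5          # minutes between syncs
  sync_on: trusted      # or all; anything else means the timer is never activated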

Tarsnap

Tarsnap is installed with its default configuration file. However, setting up Tarsnap is left as an exercise for the user. New Tarsnap users should register their machine and generate a key. Existing users should recover their key(s) and cache directory from their backups (or, alternatively, recover their key(s) and rebuild the cache directory with tarsnap --fsck).

Tarsnapper is installed to manage backups. A basic configuration file to back up /etc is included. Tarsnapper is configured to look in /usr/local/etc/tarsnapper.d for additional jobs. As with the Tarsnap key and cache directory, users should recover their job files from backups after the Tarsnapper install is complete. See the Tarsnapper documentation for more details.

Running Tarsnap

A systemd unit file and timer are included for Tarsnapper. Rather than calling it directly, the systemd unit wraps Tarsnapper with backitup.

The timer is set to execute the unit hourly, but backitup will only call Tarsnapper once within the period defined in the tarsnapper.period variable. This defaults to DAILY. This increases the likelihood of completing daily backups by checking each hour whether the unit has run successfully on the current calendar day.

In addition to the period limitation, backitup defaults to only calling Tarsnapper when it detects that the machine is on AC power. To allow Tarsnapper to run when on battery, set the tarsnapper.ac_only variable to False.

As with mailsync, the timer is not started or enabled by default. Instead, the timer is added to /etc/nmtrust/trusted_units, causing the NetworkManager trusted unit dispatcher to activate the timer whenever a connection is established to a trusted network. The timer is stopped whenever the network goes down or a connection is established to an untrusted network.

To have the timer activated at boot, change the tarsnapper.run_on variable from trusted to all.

If the tarsnapper.run_on variable is set to anything other than trusted or all, the timer will never be activated.
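
As above, the Tarsnapper-related variables might look like this (a hedged sketch; the nesting is an assumption based on the variable names):

tarsnapper:
  period: DAILY     # how often backitup allows Tarsnapper to run
  ac_only: True     # set to False to allow backups on battery
  run_on: trusted   # or all; anything else means the timer is never activated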

Tor

Tor is installed by default. A systemd service unit for Tor is installed, but not enabled or started. Instead, the service is added to /etc/nmtrust/trusted_units, causing the NetworkManager trusted unit dispatcher to activate the service whenever a connection is established to a trusted network. The service is stopped whenever the network goes down or a connection is established to an untrusted network.

To have the service activated at boot, change the tor.run_on variable from trusted to all.

If you do not wish to use Tor, simply remove the tor variable from the configuration.

parcimonie.sh

parcimonie.sh is provided to periodically refresh entries in the user's GnuPG keyring over the Tor network. The service is added to /etc/nmtrust/trusted_units and respects the tor.run_on variable.

BitlBee

BitlBee and WeeChat are used to provide chat services. A systemd service unit for BitlBee is installed, but not enabled or started by default. Instead, the service is added to /etc/nmtrust/trusted_units, causing the NetworkManager trusted unit dispatcher to activate the service whenever a connection is established to a trusted network. The service is stopped whenever the network goes down or a connection is established to an untrusted network.

To have the service activated at boot, change the bitlbee.run_on variable from trusted to all.

If the bitlbee.run_on variable is set to anything other than trusted or all, the service will never be activated.

By default BitlBee will be configured to proxy through Tor. To disable this, remove the bitlbee.torify variable or disable Tor entirely by removing the tor variable.

git-annex

git-annex is installed for file syncing. A systemd service unit for the git-annex assistant is enabled and started by default. To prevent this, remove the gitannex variable from the config.

Additionally, the git-annex unit is added to /etc/nmtrust/trusted_units, causing the NetworkManager trusted unit dispatcher to activate the service whenever a connection is established to a trusted network. The service is stopped whenever a connection is established to an untrusted network. Unlike other units using the trusted network framework, the git-annex unit is also activated when there are no active network connections. This allows the git-annex assistant to be used when on trusted networks and when offline, but not when on untrusted networks.

If the gitannex.stop_on_untrusted variable is set to anything other than True or is not defined, the git-annex unit will not be added to the trusted unit file, resulting in the git-annex assistant not being stopped on untrusted networks.

PostgreSQL

PostgreSQL is installed and enabled by default. If the postgresql.enable variable is set to anything other than True or is not defined, the service will not be started or enabled.

This is intended for local development. PostgreSQL is configured to only listen on localhost and no additional ports are opened in the default firewall. This configuration means that PostgreSQL is not a network service. As such, the PostgreSQL service is not added to /etc/nmtrust/trusted_units.

Additional configuration options are set which improve performance but make the database service inappropriate for production use.
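
For illustration only, development-oriented settings of this kind typically look like the following in postgresql.conf (these specific values are assumptions, not necessarily the ones Spark applies):

listen_addresses = 'localhost'    # not reachable from the network
fsync = off                       # faster, but unsafe for production data
synchronous_commit = off
full_page_writes = off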

Himawaripy

Himawaripy is provided to fetch near-realtime photos of Earth from the Japanese Himawari 8 weather satellite and set them as the user's desktop background via feh. This should provide early warning of the presence of any Vogon constructor fleets appearing over the Eastern Hemisphere.

A systemd service unit and timer are installed, but not enabled or started by default. Instead, the service is added to /etc/nmtrust/trusted_units, causing the NetworkManager trusted unit dispatcher to activate the service whenever a connection is established to a trusted network. The service is stopped whenever the network goes down or a connection is established to an untrusted network.

To have the service activated at boot, change the himawaripy.run_on variable from trusted to all.

If the himawaripy.run_on variable is set to anything other than trusted or all, the service will never be activated.

By default the timer is scheduled to fetch a new image at 15 minute intervals. This can be changed by modifying the himawaripy.run_time variable.

If the himawaripy variable is removed entirely, no related tasks will be run.
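
For example (a hedged sketch; the nesting and the run_time value format are assumptions):

himawaripy:
  run_on: trusted   # or all; anything else means the service is never activated
  run_time: 15      # minutes between fetches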

spark's People

Contributors

awsumco, brett, elvetemedve, inomoz, jshuping, ohdearaugustin, petrushka, pigmonkey, raehik, stratosgear


spark's Issues

zathura and firejail break synctex functionality (backward search)

About synctex and backward search functionality:

Synctex is a tool that enables synchronization of the text editor position and
the pdf viewer position. The tool may be used to add mappings in vim to go to
the current position in the compiled pdf document (forward search), and also
to go from a specific position in the pdf file to the corresponding position
in vim (inverse/backward search).
...
In supported viewers one may set up backward or inverse search. When
correctly set up, this allows one to go directly from a selected line in the
viewer (typically by double clicking with the mouse or something similar) to
the corresponding line inside the VIM instance.

For example, I'm using vim (+vimtex) and zathura together. Zathura should be able to run vim like this: vim --servername VIM --remote +10 my-file.tex to force Vim to place the cursor on line 10.

This is blocked by spark's defaults because the default zathura.profile does not include the vim binary at all. The solution is to add this line: private-bin zathura,vim

P.S. Possibly this should be a little more flexible and support any editor binary, not only Vim.

Grsecurity breaks VirtualBox

There doesn't seem to be any way around this short of building a custom kernel, which I do not want to do.

I use VirtualBox to spin up a handful of Debian-based images, mostly managed with Vagrant. Switching over to some sort of libvirt/kvm backend may be more productive than trying to get vbox to work.

Firejail itself probably not secure enough

There were several security issues found recently:

CVE-2017-5180: http://openwall.com/lists/oss-security/2017/01/04/1
tmpfs related (CVE-2016-10117, CVE-2016-10119), rewrite /etc/resolv.conf (CVE-2016-10118), full access to system files (CVE-2016-10120, CVE-2016-10121), root access (CVE-2016-10122, CVE-2016-10123): http://openwall.com/lists/oss-security/2017/01/05/4

And 2 more which are fixed by now: CVE-2017-5207, CVE-2017-5206.

This isn't really about the sandbox itself, but if your user is somehow vulnerable, then (by means of firejail) your root is also vulnerable. If I understand correctly, that is a consequence of the nature of firejail and of its use of the suid mechanism.

It is probably worth considering a switch to some alternative. For example:

  1. bubblewrap
  2. Sandstorm.io
  3. Other tools. Some of them mentioned (with some comparison) in bubblewrap README.

SSH: option to generate ssh private key for user

It is probably sensible to generate an SSH private key if one isn't provided.

With code like this:

- name: Generate user SSH key
  user:
    name: "{{ user.name }}"
    generate_ssh_key: yes
    ssh_key_bits: 2048
    ssh_key_file: "/home/{{ user.name }}/.ssh/id_rsa"
  when: ssh.user_key is undefined

Udisks: why disable automounting and why mount to media?

Why do you disable automounting? Is it for security reasons? If so, please give some links to read about it.

And about /media: do you change the default behaviour (/var/run/media) just for convenience, or is there something else?

- name: Disable udisks automounting
...
- name: Mount to /media

~/aur vs manual aura -Ayu

Hi again, wondering if you have any advice on what I'm doing wrong, or what secret you may have for this. Here is what I have done:

  • haven't run ansible in about 3 weeks
  • since then, I have run sudo aura -Ayu which has updated several pkgs
  • I noticed you had made some playbook changes, pulled those, ran ansible
  • and now a handful of aur packages got downgraded, since my ~/aur hadn't changed since 3 weeks ago:
    $  grep downgraded /var/log/pacman.log 
    [2015-10-27 09:02] [ALPM] downgraded chromium-pepper-flash (1:19.0.0.226-1 -> 1:19.0.0.207-1)
    [2015-10-27 09:02] [ALPM] downgraded firejail (0.9.32-1 -> 0.9.30-1)
    [2015-10-27 09:02] [ALPM] downgraded virtualbox-ext-oracle (5.0.8-1 -> 5.0.6-1)
    [2015-12-08 10:39] [ALPM] downgraded aura-bin (1.3.4-1 -> 1.3.2.1-1)

Am I wrong to expect the playbook to be fairly idempotent (like how I use Puppet) and to run it multiple times? In other words, are you only running ansible once on a new system?
Or are you updating your ~/aur somehow? I guess I could just nuke ~/aur, but I think I found another idempotency issue so I thought I'd ask this question. Cheers!

Unbound

Why do you install unbound by default? Could you please point out the benefits of having it in the default configuration?

Add tool for preventing OOM (out of memory)

As stated in the nohang README:

OOM conditions may cause freezes, livelocks, drop caches and processes to be killed (via sending SIGKILL) instead of trying to terminate them correctly (via sending SIGTERM or takes other corrective action). Some applications may crash if it's impossible to allocate memory.

So it is very convenient to have one in the playbook.

There are several options mentioned in nohang README.

Maybe nohang itself is a good option.

SSD support: enabling TRIM on all levels

According to the article How to properly activate TRIM for your SSD on Linux: fstrim, lvm and dm-crypt,

TRIM should be enabled in all configs except fstab:

  1. /etc/crypttab
  2. /etc/lvm/lvm.conf

Quote:

you must enable support for TRIM in these three layers: The filesystem, LVM and dm-crypt. There is no point in enabling it at the filesystem level if you don’t enable it also on the other layers. The TRIM command should be translated from one layer to another until reaching the SSD.

But spark does not include any setup like this.

Proposed fingerprint reader setup breaks Ansible

If I have

fingerprint: True

in group_vars/all and set up fprintd, then try to run ansible-playbook -i localhost playbook.yml --ask-become-pass, I get this error:

<localhost> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1680455171.562048-412582-219023120032225/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "msg": "timeout waiting for privilege escalation password prompt:\n"
}

Workaround

Install and set up pam_fprintd_grosshack as suggested in the Arch wiki:

If you want to prompt for fingerprint and password input at the same time, you can use pam-fprint-grosshack. This may be needed for some graphical programs which do not allow blank password input, such as Gnome's built-in polkit agent.

auth		sufficient  	pam_fprintd_grosshack.so
auth		sufficient  	pam_unix.so try_first_pass nullok
...

P.S. fprintd with pam_fprintd.so also breaks the yay AUR helper, and pam_fprintd_grosshack fixes that too.

sshd config, some notes

AllowTcpForwarding:

Quotation from man sshd_config:

The default is “yes”. Note that disabling TCP forwarding does not improve security unless users are also denied shell access, as they can always install their own forwarders

So it probably isn't necessary to set it to no.

Also, could you please explain why you use exactly this Cipher (aes256-ctr) and these MACs (hmac-sha2-512)? Why these and not something else (the defaults, for example)?

Networkmanager

Please explain why you add the following packages as dependencies for NetworkManager:

- xfce4-notifyd                                                                                                          
- gnome-keyring 

Firejailed browsers depend on XDG user dirs, which don't exist

The default browser profiles (Firefox, Chromium) include the following line:

whitelist ${DOWNLOADS}

so they depend on the existence of the ${DOWNLOADS} directory. Otherwise you cannot actually download files with jailed browsers. The Downloads folder is one of the XDG user dirs.

I propose adding the following lines to the x role:

- name: Install xdg-user-dirs
  pacman: name=xdg-user-dirs state=present

- name: Create user directories
  shell: LC_ALL=C /usr/bin/xdg-user-dirs-update
  args:
    creates: /home/{{ user.name }}/Downloads
  become: true
  become_user: "{{ user.name }}"

Instead of LC_ALL=C one could use a configurable locale; somebody might want localized XDG dirs.

Ansible AUR module keep asking for sudo password

The problem

When packages from the AUR are being installed and the process takes longer than sudo's passwd_timeout, I get a sudo password prompt and the installation is paused. I would expect Ansible to provide a non-interactive provisioning process (not asking for any information during provisioning, except at the very beginning or end).

If the AUR module could utilise Ansible's become feature, that would be great.

Actual result

I have to type the password of my user multiple times.

(screenshot: spark-install-aur-fonts)

Expected result

Installation/uninstallation completes without prompting for password at all.

Consider arch-secure-boot

There is arch-secure-boot:

Highly opinionated setup that provides minimal Secure Boot for Arch Linux, and a few recovery tools.

According to Arch wiki:

Secure Boot is a feature of UEFI that allows authentication of the files your computer boots. This helps preventing some evil maid attacks such as replacing files inside the boot partition. Normally computers come with keys that are enrolled by vendors (OEM). However these can be removed and allow the computer to enter Setup Mode which allows the user to enroll and manage their own keys.

Task gnupg : Enable gpg-agent.socket failed

The problem

When provisioning a machine where the user does not exist, Ansible fails at the task
gnupg : Enable gpg-agent.socket.

Actual output

fatal: [localhost]: FAILED! => {"changed": false, "cmd":/usr/bin/systemctl --user", "msg": "Failed ro connect to bus: No such file or directory", "rc": 1}

Expected output

ok: [localhost]

Additional info

The reason for this is that gpg-agent.socket is going to be installed as a user service (as opposed to a system service), and for that systemctl --user needs to connect to the socket /run/user/< name-of-new-user >/bus; but that socket does not exist, because the current user is root and only the current user gets a socket created by default.

/etc/hosts / Unbound not respected by NetworkManager with DNS provided by DHCP

It seems that the NetworkManager config provided here doesn't utilize the Unbound server also installed, as NetworkManager simply overwrites /etc/resolv.conf if any DNS servers are provided by the DHCP server.

I looked into several possible solutions, but couldn't find one that seemed to stick. The farthest I got was installing and starting dnssec-trigger and setting dns=unbound in NetworkManager.conf, but it still resolved some of the sites given in /etc/hosts to the real IP address, rather than 0.0.0.0.

A bit stuck, hope you can help figure this out.

Cower fails to install

This is the information I have about the error:

fatal: [host1]: FAILED! => {
    "changed": false, 
    "failed": true, 
    "invocation": {
        "module_args": {
            "dir": null, 
            "name": "cower", 
            "skip_pgp": true, 
            "user": "someone"
        }
    }, 
    "msg": "failed to install package cower, because: ==> Making package: cower 17-2 (Thu Oct 19 14:09:03 EEST 2017)\n==> Checking runtime dependencies...\n==> Checking buildtime dependencies...\n==> Retrieving sources...\n  -> Found cower-17.tar.gz\n  -> Found cower-17.tar.gz.sig\n==> WARNING: Skipping verification of source file PGP signatures.\n==> Validating source files with md5sums...\n    cower-17.tar.gz ... Passed\n    cower-17.tar.gz.sig ... Skipped\n==> Extracting sources...\n  -> Extracting cower-17.tar.gz with bsdtar\n==> Removing existing $pkgdir/ directory...\n==> Starting build()...\n/bin/sh: pod2man: command not found\nmake: *** [Makefile:90: cower.1] Error 127\n==> ERROR: A failure occurred in build().\n    Aborting...\n"
}

I have also tried to install via yaourt and I get an error about PGP signatures:

ERROR: One or more PGP signatures could not be verified!
ERROR: Makepkg was unable to build cower

It seems like a cower issue and not spark's.

harfbuzz update + Infinality breaking System

Quotation from fix-infinality.md:

If you installed infinality-bundle or the patched freetype2-infinality(-ultimate) package, you'll most likely recently have run into an error relating to the harfbuzz package (>= 1.4.1-1), specifically something like: /usr/lib/libharfbuzz.so.0: undefined symbol: FT_Get_Var_Blend_Coordinates.

So this is a problem which breaks a lot of processes, compilation for example.

CC -o dwm
/usr/lib/libharfbuzz.so.0: undefined reference to `FT_Get_Var_Blend_Coordinates'
collect2: error: ld returned 1 exit status
make: *** [Makefile:29: dwm] Error 1

Literature on this topic:

  1. Workaround: https://gist.github.com/cryzed/e002e7057435f02cc7894b9e748c5671
  2. Reddit discussion: https://www.reddit.com/r/archlinux/comments/5mphpu/psa_harfbuzz_update_infinality_breaking_system/
  3. Another discussion in GitHub issues: polybar/polybar#310

Wormhole: missing AUR dependencies

It seems like AUR dependencies cannot be installed (or detected) via ansible-aur module.

Possible solutions:

  1. (dirty but easy) list all of wormhole's current AUR dependencies by hand in the Ansible role, like this:

     - name: Install magic-wormhole
       aur: name={{ item }} user={{ user.name }}
       with_items:
         - python-tqdm
         - python-hkdf
         - python-pynacl
         - python-spake2
         - magic-wormhole

  2. make the ansible-aur module use yaourt (or something similar), which is capable of installing AUR dependencies

Error itself:

< TASK [wormhole : Install magic-wormhole] >
 ------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "failed to install package magic-wormhole, because: ==> Making package: magic-wormhole 0.8.1-3 (Wed Dec 28 19:27:56 AEDT 2016)\n==> Checkin
g runtime dependencies...\n==> Installing missing dependencies...\nerror: target not found: python-tqdm\nerror: target not found: python-hkdf\nerror: target not found: python-pynacl\nerror: target not found: pyth
on-spake2\n==> ERROR: 'pacman' failed to install missing dependencies.\n"}

Cannot install cower

This is an absolutely fresh Arch Linux install (real hardware, not virtual).

There is an error while installing cower:

TASK [base : Install cower] ****************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "failed to install package cower, because: ==> ERROR: Cannot find the strip binary required for object file stripping.\n"}
	to retry, use: --limit @/home/petrunin/tmp/spark/playbook.retry

The reason is the missing binutils package.

pulseaudio-modules-bt is deprecated

According to the discussion at pulseaudio-modules-bt-git (quoting a message from 2021-02-08):

Upstream is deprecated, see EHfive/pulseaudio-modules-bt#154. I am going to orphan this package.

You have 2 alternative options alongside with this:

PipeWire: A2DP (LDAC, AAC, aptX(-HD)) and HFP(mSBC) are working on stable release. (git package maintained by me: https://aur.archlinux.org/packages/pipewire-common-git/)

PulseAudio: A2DP (LDAC, AAC, aptX(-HD)) support is merged, no HFP support yet, but you need to use "git" package before a new stable version shipped

Set LICENSE so others know how to work with your code

Hello, thanks for sharing your ansible playbook and roles with the public!

As you encourage the reader to do in your blog, I'm considering forking this project in the near future, or at least taking inspiration from some of your tasks and roles. But your project does not have a license at all. Without a license it isn't clear what a user (me in this case) is allowed to do with your code. In the end it has to be considered proprietary, and re-using your code in other publicly shared projects is not allowed.

Would it be possible to set a license for your project?

Some Ideas:

  • If you don't care what happens with your code at all and want to release it to the public domain, you could set UNLICENSE.
  • If you want to get a copyright notice to you in other projects, you could set MIT.
  • If you want that all future versions of your code get released as open source too, you could set GPL.

The easiest for people like me would be the UNLICENSE, because then I don't need to worry much about it. But I can understand if you want to be credited or to ensure that forks are also shared as open source with the community.

broken pipe in pacman!

I am using Manjaro Linux. When I try to install any package with pacman, I get an error:

error: unable to write to pipe (Broken pipe)

What should I do to fix this error?

thank you.

sudo pacman -S maxima                                                                
resolving dependencies...
looking for conflicting packages...

Packages (1) maxima-5.46.0-5

Total Installed Size:  63.02 MiB
Net Upgrade Size:       0.00 MiB

:: Proceed with installation? [Y/n] 
(1/1) checking keys in keyring                                              [##########################################] 100%
(1/1) checking package integrity                                            [##########################################] 100%
(1/1) loading package files                                                 [##########################################] 100%
(1/1) checking for file conflicts                                           [##########################################] 100%
(1/1) checking available disk space                                         [##########################################] 100%
:: Running pre-transaction hooks...
(1/1) Creating Timeshift snapshot before upgrade...
==> skipping timeshift-autosnap due skipRsyncAutosnap in /etc/timeshift-autosnap.conf set to TRUE.
:: Processing package changes...
(1/1) reinstalling maxima                                                   [##########################################] 100%
:: Running post-transaction hooks...
(1/6) Arming ConditionNeedsUpdate...
(2/6) Updating the MIME type database...
(3/6) Refreshing PackageKit...
(4/6) Updating the info directory file...
(5/6) Updating the desktop file MIME type cache...
(6/6) SELinux: relabel installed files
error: unable to write to pipe (Broken pipe)

Consider Wayland

Security page of ArchWiki says:

Prefer using Wayland over Xorg. Xorg's design predates modern security practices and is considered insecure by many. For example, Xorg applications may record keystrokes while inactive.

If you must run Xorg, it is recommended to avoid running it as root. Within Wayland, the XWayland compatibility layer will automatically use rootless Xorg.

So it seems that, in order to increase security, it is worth considering a move to Wayland.

Tuning SSD: improve expected lifetime and increase performance of the whole system

There are several articles on the subject. I mention some of them:

  1. The main advice is to reduce SSD write frequency via ramdisks (Debian, Arch and "After installation — Setting up ramdisks using tmpfs" from the blog post). Some quotations follow.

    Blog:

    good paradigm is to use ramdisks for temporary files to avoid wasting writes for files we don't want to keep anyway. Looking at the UNIX Filesystem Hierarchy Standard we note that temporary files are stored at /tmp and (in practice to a lesser extent) at /var/tmp. Moving the files in these locations to RAM can now be easily achieved by mounting the corresponding paths with the filesystem type tmpfs. This will create a dynamical ramdisk, meaning that only as much RAM is used up by the disk as it requires to store the files

    If you can spare a little more RAM you should furthermore consider the use of ramdisks for locations in the file system tree where frequent writes take place, e.g. /var/log and /var/spool. You might be aware that /var/log contains the system's logfiles, meaning that by applying these settings you will lose all logs on system reboot. For servers this would surely be a problem, but in the case of laptops I don't want to lose this option of saving a lot of writes.

    Arch:

    Avoiding unnecessary access to slow storage drives is good for performance and also increasing lifetime of the devices, although on modern hardware the difference in life expectancy is usually negligible.

  2. Another, but controversial, common piece of advice is to switch to a low-latency I/O scheduler:

    The default I/O scheduler queues data to minimize seeks on HDDs, which is not necessary for SSDs. Thus, use the "deadline" scheduler that just ensures bulk transactions won't slow down small transactions

    The blog post (After installation — Changing the disk scheduler) also says:

    The disk scheduler controls how the kernel schedules access to storage drives. For rotating disks data access is obviously a slow process and it needs to be distributed fairly between the many processes that might want to read or write something to/from different parts of the disk. The default scheduler called cfq is optimised for the latter scenario, but not for SSDs. Performance can be greatly improved if noop or deadline are used instead. Noop is the most simple one: It basically handles requests in the order they are submitted. Deadline in some sense implements a timer on top and stops requests that take too long. Which one to use pretty much depends on the underlying hardware and the use case, but for a normal laptop and a normal use it should not make much of a difference.

    Arch wiki warns us:

    Only the CFQ scheduler supports setting IO priorities with ionice. Some background processes rely on this capability to perform background IO unobtrusively by reducing their IO priority, for example KDE's file indexer baloo. Using a different scheduler than the default CFQ scheduler can worsen the user experience on desktops.

  3. One more controversial option is adding the mount options noatime and nodiratime. See the blog (After installation — Editing /etc/fstab), Arch and Debian.

    Blog:

    For each partition that is located on an SSD drive you should at least add the mount options noatime and nodiratime. These options will suppress the usual bookkeeping of access times for files and directories, respectively. Usually accessing a file implies a write as well since the current date and time will be written to the so-called inode table of the filesystem. For most applications this is not required and can therefore be safely disabled using the above options.

    Arch:

    The noatime option is known to improve performance of the filesystem.

    Debian:

    Add the "noatime" (or "relatime") mount option in /etc/fstab, to disable (or significantly reduce) disk writes whenever a file is read. Please note that since Linux kernel 2.6.30, "relatime" is the default. This improves filesystem read performance for both SSDs and HDDs.

  4. Blog: Enabling device-level write cache

    Turning on the write cache of a drive causes the drive to use some internal RAM to cache files until they are written to the disk properly. This improves performance in both SSDs as well as traditional hard drives. A disadvantage is that in the case of an abrupt power loss enabling this feature could lead to data loss. Since data is usually only kept in the disk cache for extremely short times this is hardly a problem for normal use cases.

  5. Move the browser cache (or even whole browser or other software profiles) to RAM.

    Blog:

    On a user level it is mostly the browser cache which is very predictable to cause a lot of writes. Now that /tmp is located in RAM it is a good idea to make the browser drop its cache at this location

    Move full browser profile to RAM and sync: https://wiki.archlinux.org/index.php/Profile-sync-daemon

    Since the profile(s), browser cache*, etc. are relocated into tmpfs (RAM disk), the corresponding I/O associated with using the browser is also redirected from the physical drive to RAM, thus reducing wear to the physical drive and also greatly improving browser speed and responsiveness.

    Arch#Relocate files to tmpfs provides following options:

    Relocate files, such as your browser profile, to a tmpfs file system, for improvements in application response as all the files are now stored in RAM:

    1. Refer to Profile-sync-daemon for syncing browser profiles. Certain browsers might need special attention, see e.g. Firefox on RAM.
    2. Refer to Anything-sync-daemon for syncing any specified folder.
    3. Refer to Makepkg#tmpfs for improving compile times when building packages.
    

Fonts task - wrong configuration paths?

I got an error when Ansible executed the fonts tasks:

TASK [fonts : Enable LCD filter] ************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "src file does not exist, use \"force=yes\" if you really want to create the link: /etc/fonts/conf.avail/11-lcdfilter-default.conf", "path": "/etc/fonts/conf.d/11-lcdfilter-default.conf", "src": "/etc/fonts/conf.avail/11-lcdfilter-default.conf"}

I changed them from /etc/fonts/ to /usr/share/fontconfig and it seems to have worked. I had these issues with the latest Arch installation image.
So is it possible to fix this? Maybe auto-detect these paths somehow.

Inom-Turdikulov@a7dbc2a

Broken pipe in pacman and aur

I am using the spark repo to configure a laptop of mine.

Unfortunately I am getting these kinds of messages from pacman and aur:

<192.168.1.2> (0, '', 'OpenSSH_7.6p1, OpenSSL 1.1.0f  25 May 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: 
auto- mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: 
mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: 
mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 7263\r\ndebug3: mux_client_request_session: session 
request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r
\ndebug2: Received exit status from master 0\r\n')

Then the shell halts and after 3-4 minutes it fails. The system I am trying to deploy is a clean install of the latest Manjaro OS. Not every role fails - I have commented everything out in order to find out why this error happens. The OpenSSH server has the default configuration, so nothing fancy here.

I am not sure this issue is this repo's fault, just posting it here in case someone else has the same. I've seen some similar issues in Ansible's repository for 2.4. Please ignore and close if that's the case.

VirtualBox not working due to missing headers

I had to add the linux-headers package to the base packages.
Was this really missing, or did I make a configuration mistake?
Grepping the source from GitHub didn't yield a result.

Fonts: Failed to install freetype2-infinality-ultimate

The output:

failed: [localhost] (item=[u'freetype2-infinality-ultimate', u'fontconfig-infinality-ultimate', u'cairo-infinality-ultimate']) => {"failed": true, "item": ["freetype2-infinality-ultimate", "fontconfig-infinality-ultimate", "cairo-infinality-ultimate"], "msg": "failed to install freetype2-infinality-ultimate"}

Possible reason:

freetype2-infinality-ultimate and freetype2 are in conflict. Remove freetype2?

light is deprecated, consider alternatives

light is orphaned at the moment:

WARNING: This project is considered orphaned since the 8th of March, 2023. Use is heavily discouraged until such a time that it is adopted by another developer.

Maybe it is worth considering alternatives like brightnessctl.

Moreover, brightnessctl allows controlling the brightness of an external monitor, not only the laptop screen:

I'd like to configure the brightness of an external monitor

Use the ddcci-driver-linux kernel module to expose external monitor brightness controls to brightnessctl.

How to maintain your own fork of spark?

A couple of years ago I cloned spark and started making my own changes in the repository (because my set of software and settings couldn't be 100% the same as somebody else's). From time to time I've cherry-picked some commits from upstream. But now there are ~2 years of upstream commits that I want to incorporate into my fork of spark (and then somehow apply the changes with Ansible).

So this is more of a question than an issue. What is the best practice?

How to:

  • have my own fork with my own changes (some roles commented out, etc.)
  • pull upstream changes
  • have changes (from commits) applied to my installation

sshd.socket not available anymore

Hi, I'm trying out this playbook.

I'm having issues with the ssh role. It seems that sshd.socket is not available in the openssh package anymore?

I suppose the role has to be updated to use sshd.service instead?

Best regards,
Daniel

Sound: why alsa and not pulseaudio?

Why did you decide to stick with ALSA instead of PulseAudio?

I've been using PulseAudio for a long time and haven't seen any significant trouble (except for the need to restart pulse after my laptop suspends sometimes).

Benefits (short list): Bluetooth audio support, network audio support, per-application volume control, simplified volume controls and source selection.

Grsecurity playbook assumes bootloader is grub

On my EFI-based MacBook, I'm using systemd-boot (formerly gummiboot) instead of GRUB.

I will have to figure out the steps to actually configure/load/boot the grsec kernel on my laptop.
After running spark, I have /etc/mkinitcpio.d/linux-grsec.preset (in addition to linux.preset).
Running the typical mkinitcpio -p linux only reads linux.preset.
I probably also have to update/append to /boot/loader/arch/entries.conf to choose the grsec kernel from the boot menu.

Editors: failed to install gvim

On an absolutely fresh system the vim package is preinstalled, so installing gvim always breaks the execution of the playbook:

:: gvim and vim are in conflict (vim-minimal). Remove vim? [y/N]

Pipewire instead of Pulseaudio

Have you thought about moving to PipeWire?

From Arch Wiki:

PipeWire is a new low-level multimedia framework. It aims to offer capture and playback for both audio and video with minimal latency and support for PulseAudio, JACK, ALSA and GStreamer-based applications.

From Wikipedia:

PipeWire has received much praise, especially among the GNOME and Arch Linux communities. Particularly, it fixes many problems that PulseAudio had experienced, including its high CPU usage, Bluetooth connection issues, and its JACK backend issues.

From official wiki, How Is PipeWire Supposed To Be A Better PulseAudio?:

PipeWire can achieve lower latency with much less CPU usage and dropouts compared to PulseAudio. This would greatly improve video conferencing apps, like WebRTC in the browser.
PipeWire's security model can stop applications from snooping on each other's audio.
PipeWire allows more control over how applications are linked to devices and filters.
PipeWire uses an external policy manager that can provide better integration with the rest of the desktop system and configuration.

Even Pulseeffects migrated to PipeWire.

So it seems like a good idea to try it.

Browsers: Destination /etc/xdg/mimeapps.list does not exist

When setting the browser chooser as the default application for the relevant mimetypes, there is an error.

/ TASK [browsers : Set rofi browser chooser as default \
\ application for relevant mimetypes]                  /
 ------------------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

failed: [localhost] (item={u'regexp': u'^text/html=', u'line': u'text/html=browser.desktop'}) => {"failed": true, "item": {"line": "text/html=browser.desktop", "regexp": "^text/html="}, "msg": "Destination /etc/xdg/mimeapps.list does not exist !", "rc": 257}
failed: [localhost] (item={u'regexp': u'^text/xml=', u'line': u'text/xml=browser.desktop'}) => {"failed": true, "item": {"line": "text/xml=browser.desktop", "regexp": "^text/xml="}, "msg": "Destination /etc/xdg/mimeapps.list does not exist !", "rc": 257}
failed: [localhost] (item={u'regexp': u'^text/mml=', u'line': u'text/mml=browser.desktop'}) => {"failed": true, "item": {"line": "text/mml=browser.desktop", "regexp": "^text/mml="}, "msg": "Destination /etc/xdg/mimeapps.list does not exist !", "rc": 257}
failed: [localhost] (item={u'regexp': u'^application/xhtml\\+xml=', u'line': u'application/xhtml+xml=browser.desktop'}) => {"failed": true, "item": {"line": "application/xhtml+xml=browser.desktop", "regexp": "^application/xhtml\\+xml="}, "msg": "Destination /etc/xdg/mimeapps.list does not exist !", "rc": 257}
failed: [localhost] (item={u'regexp': u'^application/xml=', u'line': u'application/xml=browser.desktop'}) => {"failed": true, "item": {"line": "application/xml=browser.desktop", "regexp": "^application/xml="}, "msg": "Destination /etc/xdg/mimeapps.list does not exist !", "rc": 257}
failed: [localhost] (item={u'regexp': u'^x-scheme-handler/http=', u'line': u'x-scheme-handler/http=browser.desktop'}) => {"failed": true, "item": {"line": "x-scheme-handler/http=browser.desktop", "regexp": "^x-scheme-handler/http="}, "msg": "Destination /etc/xdg/mimeapps.list does not exist !", "rc": 257}
failed: [localhost] (item={u'regexp': u'^x-scheme-handler/https=', u'line': u'x-scheme-handler/https=browser.desktop'}) => {"failed": true, "item": {"line": "x-scheme-handler/https=browser.desktop", "regexp": "^x-scheme-handler/https="}, "msg": "Destination /etc/xdg/mimeapps.list does not exist !", "rc": 257}

Improvements to power management

I think we can start with the recommendation to run powertop --auto-tune via systemd, as shown here:
http://loicpefferkorn.net/2015/01/arch-linux-on-macbook-pro-retina-2014-with-dm-crypt-lvm-and-suspend-to-disk/
(also note he recommends powerdown.git which redirects to tlp docs).

other wants/todos:

  • warning message when battery low and a suspend when battery critical.
  • any other tlp tweaks
  • do we need to disable rfkill? (see https://wiki.archlinux.org/index.php/TLP) .. it's still enabled on my laptop
