Install Arduino on Raspberry Pi

  1. Download the Arduino IDE for Linux ARM 64-bit

  2. Untar

    sudo tar xvJf ~/Downloads/arduino-1.8.9-linuxaarch64.tar.xz -C /opt
    
  3. Install

    sudo -E /opt/arduino-1.8.9/install.sh
    

Now, you have to add the Debian 10 login user to the dialout, tty, uucp and plugdev groups. Otherwise, you won’t be able to upload your Arduino code to the Arduino microcontroller.

sudo usermod -aG dialout $(whoami)
sudo usermod -aG tty $(whoami)
sudo usermod -aG uucp $(whoami)
sudo usermod -aG plugdev $(whoami)
sudo reboot
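
After you log back in, it is worth checking that the group changes took effect and that the board shows up as a serial device. A quick sanity check (the device names below are typical examples; your board may appear under a different one):

groups $(whoami)                # should now list dialout, tty, uucp and plugdev
ls /dev/ttyACM* /dev/ttyUSB*    # the Arduino usually shows up as one of these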

#Raspberry#iot#2019

mDNS - Bonjour service for IoT

mDNS is installed by default on most operating systems or is available as a separate package. On macOS it is installed by default and is called Bonjour. Apple releases an installer for Windows that can be found on Apple’s support page. On Linux, mDNS is provided by Avahi and is usually installed by default.
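
Once a device advertises itself over mDNS, you can reach it by its .local name and browse the services it announces. A quick sketch (the raspberrypi hostname is just an assumed example):

# resolve a .local hostname via mDNS
ping raspberrypi.local

# browse advertised service types: Avahi on Linux, Bonjour on macOS
avahi-browse --all --terminate
dns-sd -B _services._dns-sd._udp local.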

#Bonjour#iot#2019

Subreddit Image Downloader

Subreddit downloader is a bash script which:

  • downloads all images in full size
  • downloads only new images
  • supports paging
  • is macOS/Linux/Windows compatible

#!/usr/bin/env bash

###############################################################################
# Config
###############################################################################

# default subreddit=catpictures :)
subreddit=${1-catpictures} && json=${subreddit}

# default dir=<subreddit name>
dir=$(realpath ${2-${subreddit}}) && mkdir -p ${dir}

# default page=1
pages=${3-1} 

###############################################################################
# Downloading images
###############################################################################

printf "Download all subreddit \e[1;31m/r/${subreddit}\e[m images to \e[1;31m${dir}\e[m\n"

for i in $(seq ${pages});
do

    # download subreddit json file
    curl -sS "https://www.reddit.com/r/${subreddit}.json?limit=100&after=${after}" -A 'random' | json_pp -json_opt utf8,pretty > ${dir}/${json}.json

    printf "\e[1;35mProcessing data from ${dir}/${json}.json\e[m\n"

    images=$(cat ${dir}/${json}.json | jq -r ".data.children[].data.preview.images[0].source.url" | egrep '\.jpg|\.png|\.gif' )

    # download all images from file
    for img in ${images}
    do
        # getting filename from URL
        file=${img##*/} && file=${file%\?*}

        # Download only new images
        if [[ ! -f "${dir}/${file}" ]]; then
            printf "\e[1;90m- ${file}\e[m\n"
            curl -sS -A 'random' "${img//&amp;/&}" -o ${dir}/${file} &
        fi
    done;

    # go to the next page
    after=$(cat ${dir}/${json}.json | jq -r ".data.after") && json=${after}

    # jq prints "null" when there is no next page
    if [[ ${after} == "null" || -z ${after} ]]; then
        break # no more pages
    fi

done;

###############################################################################
# Cleanup
###############################################################################

rm ${dir}/*.json

wait #wait for all background jobs to terminate

Usage

./subreddit-download.sh <subreddit name> <directory> <pages>

Download all images from catpictures subreddit:

./subreddit-download.sh catpictures ./catpictures 5

Requirements
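
The script relies on curl, jq and json_pp (json_pp ships with Perl's JSON::PP module and is preinstalled on macOS). If jq is missing, install it from the usual package manager, for example:

brew install jq         # macOS (Homebrew)
sudo apt install jq     # Debian/Ubuntu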

Source code: https://github.com/OzzyCzech/subreddit-image-downloader

#bash#reddit#curl#2019

Fix casks with depends_on that reference pre-Mavericks

If you get an error of the type Error: Cask 'xxx' definition is invalid: invalid 'depends_on macos' value: ":mountain_lion", where xxx can be any cask name and :mountain_lion any macOS release name, run the following command:

/usr/bin/find "$(brew --prefix)/Caskroom/"*'/.metadata' -type f -name '*.rb' -print0 \
| /usr/bin/xargs -0 /usr/bin/perl -i -pe 's/depends_on macos: \[.*?\]//gsm;s/depends_on macos: .*//g'

#macOS#brew#Catalina#2019

Backup full git repository as single bundle file

Git is capable of "bundling" its data into a single file. The bundle command will package up everything that would normally be pushed over the wire with a git push command into a binary file that you can email to someone or put on a flash drive, then unbundle into another repository.
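
A minimal round trip looks like this (the file and directory names are just examples):

# inside an existing repository: pack all refs into one file
git bundle create repo.bundle --all

# elsewhere: unbundle by cloning straight from the file
git clone repo.bundle restored-repo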

The following bash function will clone a repository and create a single bundle file with a nice name:

#!/bin/bash

function git_backup() {
    # derive a file-friendly name from the repo path (slashes become underscores)
    target=$(echo ${1#*:} | tr / _)
    # mirror the repository, then pack all refs into a single bundle file
    git clone --mirror $1 ${target} && cd ${target}
    git bundle create ${2-../}/${target%%.git}.bundle --all
    # go back and remove the temporary mirror
    cd - && rm -rf ${target}
}

Usage:

git_backup git@github.com:OzzyCzech/dotfiles.git ~/Downloads/

PS: Note that git bundle only copies commits that lead to some reference (branch or tag) in the repository, so dangling commits are not stored in the bundle.

You can also create a nice alias in the .gitconfig file:

[alias]
  backup="!gb() { target=$(echo ${1#*:} | tr / _); git clone --mirror $1 ${target} && cd ${target}; git bundle create ${2-../}/${target%%.git}.bundle --all; cd - && rm -rf ${target}; }; gb"

For more information see https://github.com/OzzyCzech/dotfiles

Backup whole GitHub account

You can use the GitHub API to get a list of all of a user's repos. Then you have to apply all your bash magic power to extract the right names from it:

curl -s https://api.github.com/users/OzzyCzech/repos | json_pp | grep full_name | cut -d\" -f4

Or there are a number of tools specifically designed for manipulating JSON from the command line. One of the best, it seems to me, is jq:

for repo in $(curl -s https://api.github.com/users/OzzyCzech/repos | jq -r ".[].ssh_url")
do  
  git backup $repo /Volumes/Backup/git
done;

Restore

You can clone a repository directly from the bundle file:

git clone my-super-file.bundle directory
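
You can also check that a bundle file is valid before cloning it, using git bundle verify:

git bundle verify my-super-file.bundle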

#bash#git#2019

How to backup iCloud drive with rclone

All iCloud Drive data is located in the ~/Library/Mobile\ Documents/com~apple~CloudDocs/ folder. You can easily sync it with rclone to a backup hard drive connected to your Mac. In my case it is mounted at /Volumes/Backup/.

rclone sync ~/Library/Mobile\ Documents/com~apple~CloudDocs/ /Volumes/Backup/iCloudDriveBackup --copy-links

If you also need a backup of all deleted files (sync normally removes files that were deleted from the source), there is the --backup-dir parameter:

rclone sync ~/Library/Mobile\ Documents/com~apple~CloudDocs/ /Volumes/Backup/iCloudDriveBackup --copy-links \
       --backup-dir="/Volumes/Backup/iCloudDriveArchive/$(date +%Y)/$(date +%F_%T)"
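
If you want to preview what a sync would do before touching anything, rclone supports a --dry-run flag that only prints the planned transfers and deletions:

rclone sync ~/Library/Mobile\ Documents/com~apple~CloudDocs/ /Volumes/Backup/iCloudDriveBackup --copy-links --dry-run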

#iCloud#backup#macOS#2019

Download all images from URL into a single folder

There are plenty of options, but the easiest one is to use the command line. wget is a command line utility that allows you to download whole web pages, files and images from a specific URL.

The following command works just fine:

wget -nd -nc -np \
     -e robots=off \
     --recursive -p \
     --level=1 \
     --accept jpg,jpeg,png,gif \
     [example.website.com]

What does all that mean?

  • -nd, --no-directories: Do not create a hierarchy of directories when retrieving recursively.
  • -nc, --no-clobber: Do not overwrite existing files.
  • -np, --no-parent: Do not ever ascend to the parent directory when retrieving recursively.
  • -e robots=off: execute command robots=off as if it was part of .wgetrc file. This turns off the robot exclusion which means you ignore robots.txt and the robot meta tags (you should know the implications this comes with, take care).
  • -r, --recursive: Turn on recursive retrieving.
  • -p, --page-requisites: Download all the files that are necessary.
  • -l depth, --level=depth: Specify recursion maximum depth level.
  • -A, --accept: Accepted file extensions.

Other useful download options:

  • -H: span hosts (wget doesn't download files from different domains or subdomains by default)
  • --random-wait: This option causes the time between requests to vary between 0.5 and 1.5 times the value given by --wait.
  • --wait 1.0: Wait the specified number of seconds between the retrievals.
  • --limit-rate=amount: Limit the download speed to amount bytes per second
  • -U "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36": Identify as agent-string to the HTTP server, here as Chrome on Windows.

Read more in the wget manual page.

Real world example

Download all Homophones, Weakly images since 2011

wget -nd -nc -np \
     -e robots=off \
     --recursive -p \
     --level=1 \
     --accept jpg,jpeg \
     -H --random-wait \
     -U "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" \
     http://homophonesweakly.blogspot.com/{2011..2019}

#bash#wget#2019

Polly.js - Record, replay, and stub HTTP interactions.

Polly.JS is a standalone, framework-agnostic JavaScript library that enables recording, replaying, and stubbing of HTTP interactions. By tapping into multiple request APIs across both Node & the browser, Polly.JS is able to mock requests and responses with little to no configuration while giving you the ability to take full control of each request with a simple, powerful, and intuitive API.

#javascript#testing#2019