
The viewport meta tag option interactive-widget allows us to choose what the browser should do with our page when some overlay elements — such as a phone on-screen keyboard — appear:

[Image: interactive-widget options comparison]
The difference between resizes-visual and overlays-content is that resizes-visual allows the user to scroll the page up to see the sticky element at the bottom, whereas overlays-content doesn't.

Chrome's default behaviour on Android, before version 108, was resizes-content — it is now resizes-visual.

For example, the Library of Babel viewport meta tag is now:

<meta
name="viewport"
content="width=device-width, initial-scale=1.0, interactive-widget=resizes-content"
/>

This keeps the sticky pagination component visible when the on-screen keyboard is open:

[Example on the Library of Babel; the keyboard is Thumb-Key]

Without interactive-widget, scrolling down while the keyboard was open made the sticky pagination component disappear.

Read more at developer.chrome.com/blog/viewport-resize-behavior (which is the source of the comparison image above).

In case of weird text rendering issues in Chrome that look like this:

[Image: text rendering issue]

change chrome://flags/#enable-gpu-rasterization from Default to Disabled.

(It actually happens very rarely for me, so I set the flag back to Default.)

xclip is a command line utility to get or set content in the X selection or clipboard.

It has a strange side effect when used to set the clipboard content: it makes the terminal hang for a couple of seconds when closed, and, more annoyingly, it prevents Sublime Merge from terminating a command that uses it. This is probably linked to the fact that xclip starts a background process and leaves it running, which is necessary for serving the clipboard content when it is later retrieved.

After tinkering with it, I wasn't able to make it work better. However, I discovered xsel, which does the same things without this unwanted side effect.
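
For reference, here are the two invocations side by side, using the clipboard selection in both cases:

# set the clipboard with xclip (leaves a background process running,
# probably the cause of the hang described above)
echo "some text" | xclip -selection clipboard

# set the clipboard with xsel instead, without the unwanted side effect
echo "some text" | xsel --clipboard --input

# read the clipboard back
xsel --clipboard --output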

Check how many inotify watches are being used:

#!/bin/bash

# Get the procs sorted by the number of inotify watches
# @author Carl-Erik Kopseng
# @latest https://github.com/fatso83/dotfiles/blob/master/utils/scripts/inotify-consumers
# Discussion leading up to answer: https://unix.stackexchange.com/questions/15509/whos-consuming-my-inotify-resources
#
# If you need ultimate speed, use https://github.com/fatso83/dotfiles/commit/inotify-consumers-v1-fastest
# # Speed enhancements by Simon Matter <simon.matter@invoca.ch>
#
# A later PR introduced a significant slowdown to gain better output, but it is insignificant on most machines
# See this for details: https://github.com/fatso83/dotfiles/pull/10#issuecomment-1122374716

main(){
    printf "\n%${WLEN}s %${WLEN}s\n" "INOTIFY" "INSTANCES"
    printf "%${WLEN}s %${WLEN}s\n" "WATCHES" "PER "
    printf "%${WLEN}s %${WLEN}s %s\n" " COUNT " "PROCESS " "PID USER COMMAND"
    printf -- "------------------------------------------------------------\n"
    generateData
}

usage(){
    cat << EOF
Usage: $0 [--help|--limits]
    -l, --limits    Will print the current related limits and how to change them
    -h, --help      Show this help

FYI: Check out Michael Sartain's C++ take on this script. The native executable
is much faster, modern and feature rich. It can be found at
https://github.com/mikesart/inotify-info
EOF
}

limits(){
    printf "\nCurrent limits\n-------------\n"
    sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches

    cat <<- EOF
Changing settings permanently
-----------------------------
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p # re-read config
EOF
}

generateData() {
    local -i PROC
    local -i PID
    local -i CNT
    local -i INSTANCES
    local -i TOT
    local -i TOTINSTANCES
    # read process list into cache
    local PSLIST="$(ps ax -o pid,user=WIDE-COLUMN,command $COLSTRING)"
    local INOTIFY="$(find /proc/[0-9]*/fdinfo -type f 2>/dev/null | xargs grep ^inotify 2>/dev/null)"
    local INOTIFYCNT="$(echo "$INOTIFY" | cut -d "/" -s --output-delimiter=" " -f 3 | uniq -c | sed -e 's/:.*//')"
    # unique instances per process is denoted by number of inotify FDs
    local INOTIFYINSTANCES="$(echo "$INOTIFY" | cut -d "/" -s --output-delimiter=" " -f 3,5 | sed -e 's/:.*//' | uniq | awk '{print $1}' | uniq -c)"
    local INOTIFYUSERINSTANCES="$(echo "$INOTIFY" | cut -d "/" -s --output-delimiter=" " -f 3,5 | sed -e 's/:.*//' | uniq |
        while read PID FD; do echo $PID $FD $(grep -e "^ *${PID} " <<< "$PSLIST" | awk '{print $2}'); done | cut -d" " -f 3 | sort | uniq -c | sort -nr)"
    set -e

    cat <<< "$INOTIFYCNT" |
        {
            while read -rs CNT PROC; do # count watches of processes found
                echo "${PROC},${CNT},$(echo "$INOTIFYINSTANCES" | grep " ${PROC}$" | awk '{print $1}')"
            done
        } |
        grep -v ",0," |            # remove entries without watches
        sort -n -t "," -k 2,3 -r | # sort to begin with highest numbers
        { # group commands so that $TOT is visible in the printf
            IFS=","
            while read -rs PID CNT INSTANCES; do # show watches and corresponding process info
                printf "%$(( WLEN - 2 ))d %$(( WLEN - 2 ))d %s\n" "$CNT" "$INSTANCES" "$(grep -e "^ *${PID} " <<< "$PSLIST")"
                TOT=$(( TOT + CNT ))
                TOTINSTANCES=$(( TOTINSTANCES + INSTANCES ))
            done
            # These stats should be per-user as well, since inotify limits are per-user..
            printf "\n%$(( WLEN - 2 ))d %s\n" "$TOT" "WATCHES TOTAL COUNT"
            # the total across different users is somewhat meaningless, not printing for now.
            # printf "\n%$(( WLEN - 2 ))d %s\n" "$TOTINSTANCES" "TOTAL INSTANCES COUNT"
        }
    echo ""
    echo "INotify instances per user (e.g. limits specified by fs.inotify.max_user_instances): "
    echo ""
    (
        echo "INSTANCES USER"
        echo "----------- ------------------"
        echo "$INOTIFYUSERINSTANCES"
    ) | column -t
    echo ""
    exit 0
}

# get terminal width
declare -i COLS=$(tput cols 2>/dev/null || echo 80)
declare -i WLEN=10
declare COLSTRING="--columns $(( COLS - WLEN ))" # restrict ps output width to the remaining space

if [ "$1" = "--limits" -o "$1" = "-l" ]; then
limits
exit 0
fi

if [ "$1" = "--help" -o "$1" = "-h" ]; then
usage
exit 0
fi

# added this line and moved some declarations to allow for the full display instead of a truncated version
if [ "$1" = "--full" -o "$1" = "-f" ]; then
unset COLSTRING
main
fi

if [ -n "$1" ]; then
printf "\nUnknown parameter '$1'\n" >&2
usage
exit 1
fi
main

https://github.com/fatso83/dotfiles/blob/master/utils/scripts/inotify-consumers
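
To use it, save the script (e.g. as inotify-consumers), make it executable and run it with one of the flags it defines:

chmod +x inotify-consumers
./inotify-consumers           # default output, truncated to the terminal width
./inotify-consumers --full    # full, untruncated command lines
./inotify-consumers --limits  # current fs.inotify.* limits and how to raise them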

rsync doesn't handle file/folder renames well: if a folder containing lots of big files (e.g. a photos/videos library) is renamed in the source, the existing files in the destination will be deleted and everything will be copied to the destination again.

Unison appears to handle file renames, but what it actually does is detect that the files already exist in the destination and then copy them locally, within the destination itself. This saves bandwidth, but it is still slow and stresses the hard drives for no reason.

A tool made by a single developer addresses this issue perfectly: rsync-sidekick

The author mentions that the tool doesn't make any changes by itself, but to make sure of it, we can run it in a Docker container with read-only volumes. It outputs a list of commands that rename and move things in the destination to reproduce the renames/moves made in the source.
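
The Dockerfile for that image isn't shown here; a hypothetical minimal one, assuming the rsync-sidekick binary has already been downloaded from the project's releases and sits in the current directory, could be written like this:

# write a minimal Dockerfile (assumption: a local rsync-sidekick binary)
cat > Dockerfile <<'EOF'
FROM debian:stable-slim
COPY rsync-sidekick /usr/local/bin/rsync-sidekick
RUN chmod +x /usr/local/bin/rsync-sidekick
EOF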

Usage

  • Build the container:

    docker build -t rsync-sidekick .

  • Run it:

    docker run --rm \
      -v /<source-path>:/sync-src:ro \
      -v /<destination-path>:/sync-dst:ro \
      rsync-sidekick \
      /bin/bash -c "rsync-sidekick -shellscript /sync-src/ /sync-dst/ && echo && cat sync_actions_*.sh"

  • Retrieve the output, check it visually, replace sync-src and sync-dst with the real paths, and run it.

  • Run rsync in dry-run mode:

    rsync -ruvin /<source-path>/ /<destination-path>/

  • Check the output; if everything is OK, run rsync for real:

    rsync -ruvi /<source-path>/ /<destination-path>/

Remote directories

rsync-sidekick only supports local directories at the moment, so to use it with a remote one, we need to mount the directory locally. Example with SSH:

mkdir ~/remote-dir
sshfs <server>:/<path-on-server> ~/remote-dir

Then you can use ~/remote-dir as a local directory.

To unmount it, run:

fusermount -u ~/remote-dir

Not really a TIL, more of a knack: stringify an object containing circular references:

const safeStringify = (obj, indent = 2) => {
  let cache = [];
  const retVal = JSON.stringify(
    obj,
    (key, value) =>
      typeof value === "object" && value !== null
        ? cache.includes(value)
          ? undefined // Duplicate reference found, discard key
          : cache.push(value) && value // Store value in our collection
        : value,
    indent,
  );
  cache = null;
  return retVal;
};

Use like:

console.log(safeStringify(event));

From here.

Download the content of an FTP folder with wget:

wget --user=<username> --ask-password -r ftp://<url>

Although it took minutes to download 10 MB...