SHELL = bash -o pipefail
PATH := $(CURDIR)/bin:$(PATH)
export PATH

# Escape a URL into a string that is safe to use as a file name, using
# '^' as the escape character; murl2url reverses the escaping.
# For example: $(call url2murl,http://x/a:b) => http^3A//x/a^3Ab
url2murl = $(subst %,^25,$(subst :,^3A,$(subst ^,^5E,$1)))
murl2url = $(subst ^5E,^,$(subst ^3A,:,$(subst ^25,%,$1)))

# If a recipe that builds a directory fails, move the partial
# directory out of the way and propagate the original exit status.
dirfail = ( r=$$?; mv -- '$@'{,.bak}; exit $$r; )

all:
	$(MAKE) dat/urlkeys.mk
	$(MAKE) dat/index.mk
	$(MAKE) dat/pools dat/git

# Delete any saved responses that captured a 503 (Service Unavailable)
# error page, so that the next run re-fetches them.
fix:
	grep -rl '503' dat | xargs rm -fv --

.PHONY: all fix

# Stage 1 ######################################################################

dat:
	mkdir -p $@

dat/cdxindex.txt: | dat
	cdxget 'url=www.unicode.org/Public/*' 'fl=urlkey' 'filter=statuscode:200' 'filter=urlkey:.*(cvt|convert)utf.*' > $@

dat/urlkeys.txt: dat/cdxindex.txt
	< $< cut -d '?' -f1 | sort -u > $@

dat/urlkeys.mk: dat/urlkeys.txt
	< $< sed 's/^/urlkeys+=/' > $@

# Stage 2 ######################################################################
ifneq ($(wildcard dat/urlkeys.mk),)
include dat/urlkeys.mk

dat/each-cdx/%.txt:
	@mkdir -p '$(@D)'
	cdxget "url=$$(urlkey2url '$*')" 'filter=statuscode:200' 'fl=timestamp,original' > '$@'

dat/index.txt: $(addprefix dat/each-cdx/,$(addsuffix .txt,$(urlkeys))) dat/urlkeys.txt
	cat -- $(foreach c,$(filter dat/each-cdx/%,$^),'$c') | sort > $@

dat/index.mk: dat/index.txt
	< $< sed -e 's,^,index+=,' -e 's, ,/,' > $@

# Stage 3 ######################################################################
ifneq ($(wildcard dat/index.mk),)
-include dat/index.mk

dat/content-dir/%/index.wahtml:
	@mkdir -p '$(@D)'
	curl -sL 'http://web.archive.org/web/$(call murl2url,$*)' > $@

dat/content-dir/%/index.html: dat/content-dir/%/index.wahtml
	< $< wayfore > $@

dat/content-dir/%/readme.txt: dat/content-dir/%/index.html
	< $< sed -n '/^<pre>$$/,/<\/pre>/p' | sed -e 1d -e 's,</pre>.*,,' > $@

dat/content-dir/%/metadata.txt: dat/content-dir/%/index.html
	< $< grep '^<img' | sed 's/<[^>]*>//g' | grep -vi 'parent directory' | fmt-metadata $(firstword $(subst /, ,$*)) > $@

content-dir = $(foreach u,$(filter %/,$(index)),dat/content-dir/$(call url2murl,$(u)))
download += $(addsuffix readme.txt,$(content-dir)) $(addsuffix metadata.txt,$(content-dir))

dat/content-file/%:
	@mkdir -p '$(@D)'
	curl -sL 'http://web.archive.org/web/$(call murl2url,$*)' > $@

content-file = $(foreach u,$(filter-out %/,$(index)),dat/content-file/$(call url2murl,$(u)))
download += $(content-file)

download: $(download)
.PHONY: download

dat/pools.txt: $(addsuffix metadata.txt,$(content-dir)) dat/index.txt
	grep ^ $(foreach c,$(filter %/metadata.txt,$^),'$c') | sed -r -e 's,^dat/content-dir/,,' -e 's,/.*/Public/, ,' -e 's,/metadata\.txt:,/,' -e 's,\s+, ,g' | sort -u > $@

dat/pools: $(download) dat/pools.txt dat/index.txt
	rm -rf -- $@ $@.bak
	poolify dat/pools.txt dat/index.txt || $(dirfail)

dat/git: $(download) dat/index.txt
	rm -rf -- $@ $@.bak
	gitthing dat/git < dat/index.txt || $(dirfail)

################################################################################
endif
endif

.DELETE_ON_ERROR:
.SECONDARY: