Archive for the 'Computers' Category

A Guide to Movie Encoding

Saturday, April 26th, 2014

This is a guide to encoding and re-encoding movies, mostly on Linux, and also partly a rant against the most egregious practices.

I’m talking about encoding here, but in practice just about every source you can get movies from will already be encoded, be it DVDs, Blu-rays, modern cameras or files. Only very rarely will you get an unencoded stream, e.g. from a VHS. So all this actually applies mostly to re-encoding.

Also, being on Linux, one of the main requirements is that all the formats are supported by open source software. I don’t care about any possible patent violations, because those would involve software patents, and these would have been granted illegally anyway.

The tools used and denoted by fixed font are Linux commands or Debian/Ubuntu packages; but most of the software is available on other platforms as well.

Use the source

The quality of the encoding relies most heavily on the quality of the source you have. The more artifacts — no matter where from, be it from the actual film, dust, scratches, the projector, the camera, the VHS-drive, or the more modern electronic encoding-artifacts — the bigger the encoding will get to retain the same quality as the source. Some of the worst things I’ve seen are early black and white movies with loads of dust, scratches and grain.

Basically, artifacts increase the entropy, and the more entropy the less compression is possible.

  • Use the best source available. Usually Blu-ray, unless the producer just interpolated from a DVD, in which case adjust the resolution back down to the DVD level, usually 720 pixels wide (but 704 or 352 is possible).
  • Codecs matter. Some are so notoriously efficient at encoding artifacts that any re-encode will actually increase the file size. DIV3 is one such.
  • Otherwise you might gain from 20% to 50% by re-encoding DIVX, XVID or DX50 with a better codec, with no loss in visible quality. And of course, with MPEG-2 from DVDs you can gain around 80-90% in space, and with MPEG-4 AVC or VC-1 from Blu-rays, around 50-80%, depending on your quality needs.
  • Generally, a 500MB file encoded from a Blu-ray will look much better than the same 500MB file encoded from a DVD, at the same resolution. Actually, you might even get a better resolution from the Blu-ray, at the same file size.
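Before re-encoding anything, it pays to check what the source is actually encoded with. A minimal sketch of that decision (the FourCC list follows the bullets above; obtaining the codec via mediainfo is one way, shown in the comment):

```shell
# Return success if the FourCC names one of the old, inefficient
# codecs worth re-encoding (list taken from the bullets above).
needs_reencode() {
  case "$1" in
    DIV3|DIVX|XVID|DX50) return 0 ;;
    *) return 1 ;;
  esac
}

# In practice, feed it the codec reported by mediainfo, e.g.:
#   needs_reencode "$(mediainfo --Inform='Video;%CodecID%' movie.avi)"
```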

Acquiring target

For the target, there are basically three factors that matter in the overall quality: container, codec and encoder. Apart from resolution, of course, but there the maximum is dictated by the source.

  • Container is easy: It must support multiple audio streams, multiple subtitles, preferably in textual format (e.g. srt or ssa), and metadata, preferably also cover images. This Comparison of container formats makes it clear that this is Matroska, probably followed by MP4.
  • Codec is a bit more tricky. But basically, you want one of the newer ones, since they offer consistently better quality at lower file sizes. That about leaves H.264 and VP9. You probably want H.264: Blu-rays already mostly come in it, and so do YouTube videos nowadays.
  • Stop using DIV3, DIVX, XVID and DX50 right now. They’re vastly inferior to what modern codecs deliver at half the file size.
  • Audio codecs don’t have a large influence on file size, but as AC3 can’t do VBR, you don’t want that, and MP3 can’t do more than 2 channels. That leaves AAC and Opus as viable options, which happen to be the defaults to go with H.264 and VP9 respectively. Don’t use AC3, and don’t use DTS; both are obsolete.
  • Fortunately, handbrake-gtk already comes with H.264 and AAC as defaults; you only need to set the container to Matroska, and you’re good. A quality factor (RF) of 20 is usually good; 25 is still acceptable; anything more is visually bad.
  • If you’ve already got a load of MP4 files encoded with H.264 and AAC, mmg (from mkvtoolnix-gui) can rewrite the container of the file to Matroska without re-encoding. It also supports adding more audio tracks, subtitles and image attachments.
  • If you want to reduce the dimensions of the movie in order to reduce file size, don’t go below a width of 720. Actually, rather reduce the quality somewhat before reducing dimensions; the visual impact is less noticeable.
  • Don’t ever go for a “filesize of 700MB”, that’s just stupid. Nobody wants to put a movie on a CD (and actually most people wouldn’t have, even 15 years ago).
  • But be careful about file size. Sadly, there are still VFAT filesystems out there which can’t contain files bigger than 2GB, some of them used by today’s “Smart” TVs.
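The same settings also work from the command line. This is an untested sketch; HandBrakeCLI flag spellings vary between versions (check HandBrakeCLI --help), so treat the exact options as assumptions:

```shell
# Wrapper: encode $1 to a Matroska file $2 with H.264 video and AAC
# audio, at quality factor $3 (default RF 20, as recommended above).
encode_mkv() {
  HandBrakeCLI -i "$1" -o "$2" -f mkv -e x264 -q "${3:-20}" -E av_aac
}

# encode_mkv source.vob target.mkv      # RF 20
# encode_mkv source.vob target.mkv 25   # still acceptable
```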

Dub stands for Dumb

There is only one reason for dubbing a movie — making it available to children who haven’t learned to read yet, and to the illiterate.

  • Whoever ever had, and has, the idea to voice-over instead of just leaving the original language alone and subtitling it, is a total moron. And so is everyone encoding a movie with such an audio track. However, it is acceptable to voice-over parts with foreign speakers in documentaries (but not the whole documentary!).
  • If you still want to encode a dubbed audio track, make sure to also include the original language track. If that’s not possible with your container format, you’re using the wrong one.
  • Since not everyone can be expected to read every language, include all available subtitles. Again, if your container doesn’t allow that, you’re using the wrong one.
  • Hardcoded subtitles (within the movie stream itself) probably mean you’re either a moron or using the wrong software. They’re only acceptable if the source had them too.
  • Those pesky vobsub files, which are actually (mpeg) streams, can be OCR’d to text files (srt, ssa) with vobsub2srt. Whatever vobsub2srt cannot recognize can be OCR’d with SubRip (works with wine), for instance, but that will require heavy manual work. So you’re better off either getting them from opensubtitles.org or just including the vobsub.
  • Subtitles that are out of sync can be fixed with subtitleeditor. If they just start late or early, you can also just set an offset within mmg (from mkvtoolnix-gui).
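With Matroska, adding the original-language track and a text subtitle next to the dubbed audio is a single mkvmerge run. A sketch (track IDs, filenames and languages are assumptions; adjust to the actual streams):

```shell
# Remux a dubbed movie, merging in the original-language audio and a
# text subtitle file; writes movie-full.mkv next to the input.
add_original_track() {
  mkvmerge -o "${1%.mkv}-full.mkv" "$1" \
    --language 0:eng "$2" --language 0:eng "$3"
}

# add_original_track movie.mkv original-audio.ac3 subs-en.srt
```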

Finishing Touches

After having a decent file, you might want to add metadata and (if applicable) cover-images.

  • The minimum metadata you need to provide is title, year and director (yes, there are at least two movies with the same name, published the same year).
  • If the movie is a known published one, a scraper can fetch the metadata, and my nfo2xml can convert it into a Matroska metadata XML which can be muxed in with mmg.
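For reference, the Matroska tags XML that gets muxed in looks roughly like this. This is a hand-written sketch with placeholder values; TargetTypeValue 50 is the whole-movie level, and TITLE, DATE_RELEASED and DIRECTOR are standard Matroska tag names:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Tags>
  <Tag>
    <Targets>
      <TargetTypeValue>50</TargetTypeValue> <!-- 50 = movie level -->
    </Targets>
    <Simple><Name>TITLE</Name><String>Some Movie</String></Simple>
    <Simple><Name>DATE_RELEASED</Name><String>1968</String></Simple>
    <Simple><Name>DIRECTOR</Name><String>Some Director</String></Simple>
  </Tag>
</Tags>
```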

Scanning Books on Linux

Monday, March 24th, 2014

I’ve been scanning books for a long time, and it’s always kinda problematic with the toolchain, and with the workflow. Now I’ve pretty much found out what works, and what does not.

As a note: All the shell code on this page assumes your files do not have spaces or other weird characters like “;” in them. If they do, my bicapitalize can fix that.

Scanner

The first thing you want to have is a decent scanner, preferably one with an automatic document feeder (ADF). According to the internet, what you want is the Fujitsu ScanSnap iX500, since it appears to be the most reliable.

However, that’s not the one I have, mine is an EPSON Perfection V500, a combined flatbed/ADF scanner, which needs iscan with the epkowa interpreter. It works, but it’s rather slow.

Scanning Software

I mostly use xsane. With the epkowa interpreter, it gives me some rather weird choices of dpi; that’s why I mostly scan at 200×200 dpi (I would recommend 300×300 dpi, but epkowa does not give me that choice, for some weird reason). Also, I usually scan to png, since this gives me the best choices later on, and is better suited to text than jpeg.

Of course, never scan black-and-white; always colour or greyscale. Also, don’t scan to pdf directly. Your computer can produce better pdf files than your scanner does, and you would need to tear the files apart for postprocessing anyway.

Get Images from PDF

If you happen to have your images already inside a pdf, you can easily extract them with pdfimages (from poppler-utils):

pdfimages -j source.pdf prefix

Usually, they will come out as (the original) jpeg files, but sometimes you will get .ppm or .pbm. In that case, just convert them, like so:

for i in *.ppm; do convert $i `basename $i .ppm`.jpg; done

(The convert command is of course from graphicsmagick or imagemagick)

Postprocessing Images

Adjust colour levels/unsharp mask

Depending on how your scan looks, you might want to adjust colour levels or apply an unsharp mask first. For that, I’ve written some scheme scripts for gimp:

batch-level.scm
batch-level.sh
batch-unsharp-mask.scm
batch-unsharp-mask.sh

The scheme files belong in your ~/.gimp-2.8/scripts/ directory, the shell scripts in your path. Yes, they’re for batch-processing images from the command line.

Fix DPI

If the DPI is screwed, or not the same for every file, you might want to fix that too (without changing the resolution):

convert -density 300 -units PixelsPerInch source.jpg target.jpg

Tailor Scans

If your scans basically look good, as far as brightness and gamma is concerned, the thing you need is scantailor. With it, you can correct skew, artifacts at the edges, spine shadows, and even somewhat alleviate errors in brightness.

Be sure to use the same dpi in the output as in the input, as scantailor will happily blow up your output at no gain of quality. Also, don’t set the output to black-and-white, because this will most probably produce very ugly tooth-patterns everywhere.

You will end up with a load of .tif images in the out-folder, which you can either shove off to OCR directly, or turn into a pdf.

Don’t even try to use unpaper directly. It requires all the files converted to pnm (a 2MB jpeg will give a 90MB pnm), and unless your scans are extremely consistent and you give it the right parameters, it will just screw up.

Create PDF from Images

We first want to convert the tif images to jpeg, as they can then be inserted into a pdf file directly, without conversion to some intermediate format. Most of all, this will allow us to do it via pdfjam (from texlive-extra-utils), which will do it in seconds instead of hours.

for i in *.tif; do convert $i `basename $i .tif`.jpg; done

And then:

pdfjam --rotateoversize false --outfile target.pdf *.jpg

NEVER, ever use convert to create pdf files directly. It will run for minutes to hours at 100% load, and fill up all your memory or your disk. And produce huge pdf files.

Create PDF Index

Even if your PDF consists entirely of images, it might still be worthwhile to add an index. You create a file like this:
[ /Title (Title)
/Author (Author)
/Keywords (keywords, comma-separated)
/CreationDate (D:19850101092842)
/ISBN (ISBN)
/Publisher (Publisher)
/DOCINFO pdfmark
[/Page 1 /View [/XYZ null null null] /Title (First page) /OUT pdfmark

And then add it to the PDF with gs:
gs -sDEVICE=pdfwrite -q -dBATCH -dNOPAUSE \
-sOutputFile=target.pdf index.info \
-f source.pdf

The upper part, the one with the metadata, is entirely optional, but you really might want to add something like it. There are some other options for adding metadata (see below).

Another option is jpdfbookmarks, however it doesn’t seem to be very comfortable either.

OCR

The end product you want from this is either a) a PDF (or EPUB) in which text is really native text and not an image of text, rendered in a system font, or b) a PDF in which the whole image is underlaid with text, in such a way that each image of a character is underlaid with the (hopefully correctly recognized) character itself.

Sadly, I don’t know of any software on Linux which can do the latter. Unless you want to create an EPUB file, or a PDF which does not contain the original image on top of the text, you need to use some OCR software on some other platform. The trouble, of course, is that going all the way (no original image of the page) means your OCR needs to be perfect, as there is no way to re-OCR, and sometimes even no way to correct the text manually. And of course, the OCR software should retain the layout.

For books, doing a native text version is of course preferred, but for some things like invoices, you really do need the original image on top of the text.

Apparently, tesseract-ocr now incorporates some code to overlay images on text, but I haven’t tested that. Also, there seems to be some option with tesseract and hocr2pdf. But I’m not keen to try it, since ocropus, which can’t do that, has consistently had the better recognition rate, and even that one is lower than those of commercial solutions.
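For what it’s worth, newer tesseract versions can produce exactly that kind of searchable PDF (page image on top, recognized text underneath) with the built-in pdf output config. An untested sketch, assuming English text:

```shell
# OCR one scanned page into a searchable PDF; given the "pdf" config
# at the end, tesseract writes its output to ${base}.pdf.
ocr_to_pdf() {
  tesseract "$1" "${1%.*}" -l eng pdf
}

# for i in out/*.tif; do ocr_to_pdf "$i"; done
```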

Adding metadata to PDF files

I’ve written about this, and I’ve also written some scripts to handle it. You can do it by hand, with exiftool, or you can use my exif-meta, which will do it automatically, based on the file- and directory-name, for a lot of files.
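The idea behind doing it automatically can be sketched in a few lines of shell: derive the tags from the filename and let exiftool write them. The Author-Title naming scheme here is an assumption for illustration; exiftool’s -Title and -Author tags do work for PDF:

```shell
# Tag a PDF named like JaneDoe-SomeBook.pdf: everything before the
# first "-" becomes the Author, the rest becomes the Title.
tag_from_name() {
  base=${1%.pdf}
  exiftool -Title="${base#*-}" -Author="${base%%-*}" "$1"
}

# tag_from_name JaneDoe-SomeBook.pdf
```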

For books, unless your name is “Windows User” and your scientific paper is called “Microsoft Word – Untitled1”, you want to at least add Author, Title, Publishing Year and Publisher; ISBN if you have one.

Needed software

On a decent operating system (Linux) with a decent package-management (Debian or derivative), you can do:

apt-get install scantailor graphicsmagick exiftool xsane poppler-utils texlive-extra-utils

to get all the packages. The rest is linked in the text.

See also

I’ve found some other related pages you might find interesting:

Life with Calibre

Tuesday, November 26th, 2013

Calibre is undisputedly the number one when it comes to e-book management. It’s HUGE. It’s got a plethora of functions.

And it’s got quirks: design decisions which may not suit your workflow. Certainly a lot of them don’t suit mine.

  • Calibre’s own space. Every document imported into the library ends up copied into some private directory of Calibre’s, and named according to some /Author/Title/Title scheme. The way I cope with this is to import into calibre, and save-to-disk again.
  • Metadata on the filesystem. Metadata is stored not within the file, but in some database, and apparently in some opf-file alongside the book as well. Luckily, calibre tries to put metadata into the file when saving to disk. So the solution here is the same as above.
  • Name like Yoda, A. When writing files, it misnames them to some library sort order, with the article appended at the end. To fix this, there’s a parameter in “Preferences” -> “Tweaks” -> “Control Formatting of Title and Series when used in Templates”, called save_template_title_series_sorting, which needs to be set to strictly_alphabetic.
  • No such Character. There’s a set of characters Calibre does not want in file names. They are the same on all platforms, and while it’s not wise to use asterisks and such on unix filesystems, because they would wreak havoc on shell processing, they would still work. The only character really not allowed is the “/”. But Calibre also strips various characters deemed ballast on Windows, including desirable critters like “:” and “+”. The way to fix this is to edit
    /usr/lib/calibre/calibre/__init__.py and remove them from _filename_sanitize_unicode.
  • Publishing by the Month. Before the advent of e-books, publishing dates were by definition expressed in years. Copyright law also uses the year only. To get rid of the ridiculous month in the publishing date, go to “Preferences” -> “Tweaks” -> “Control how dates are displayed” and set gui_pubdate_display_format to yyyy.
  • Not unique. As librarians know, in the absence of an ISBN, books are identified by author, title, publishing year and publisher. Now when saving pdf files, Calibre neither puts in an ISBN, nor the publishing year, nor the publisher. Apparently, this is a problem of podofo, which does not know these. Speaking of which:
  • podofail. Sometimes podofo also fails to write some tags. It’s not quite clear when this happens, as none of my pdf files have any encryption, and exiftool can write metadata to them without problems.

Over time, I’ve written a slew of scripts to read and set metadata; these are:

  • epub-meta (c) — A very fast EPUB metadata-viewer based on ebook-tools’ libepub from Ely Levy.
  • epub-rename (perl) — A script to rename epub-files according to the EPUB’s metadata. Needs epub-meta and ebook-meta (from calibre).
  • exif-rename (perl) — A script to rename files according to their EXIF-tags. Tested with PDF, DJVU, M4V, OGM, MKV and MP3
  • exif-meta (perl) — A script to set EXIF/XMP-metatags according to the filename.
  • exif-info (perl) — Displays metadata and textual content of PDF files. Thought as filter for Midnight Commander

For further technical information and rants, you might want to read How to Enter EPUB Metadata, Display EPUB metadata and rename EPUB files accordingly, and Your name is “Windows User” and your scientific Paper is called “Microsoft Word – Untitled1″, also on this blog.

Minecraft: Semi-Automatic Farm

Thursday, October 24th, 2013

Welcome, this is my “1890 Fruit Company”, an automatic farm for Minecraft, which isn’t even about fruit. It looks rather like the 1890s, though, and I couldn’t resist the name.

1890 Fruit Co.

It produces potatoes, carrots, wheat and seeds. You need to sow and plant yourself; fertilizing and harvesting are pretty much automated, and the products are automatically sorted.

The schematic

The license of these files and my screenshots is the OPL 1.0 (which is about the same as CC-by-sa).

Matroska and the State of Movie Metadata

Saturday, September 21st, 2013

I like my metadata for files within the file. The reason is simple, a filesystem is only a temporary storage for a file, and things like filenames or paths only make sense within the filesystem itself. If you move the file, the filesystem might not support your particular metadata.

Starting with the path. For instance, /movies/Pirate/ won’t exist on other people’s machines, and it actually can’t even exist on stupid windows filesystems. So the fact that the file residing within this path is probably a pirate movie would get lost. And of course, not every filesystem supports all characters or encodes them the same way, and thus the movie “Pippi Långstrump på de sju haven” might end up with a totally garbled title on some filesystem.

Since I work on the Unix shell and on the web a lot, spaces in filenames tend to get garbled (“%20”) or interfere with commandline processing. So my filenames do not have spaces or umlauts in them, they are instead BiCapitalized. In fact, I’ve written a program bicapitalize to do just that.

Enter Matroska

When it comes to metadata, the one container format that can contain just about everything is Matroska. MP4 would be a possibility, but it’s rather constricted in its use of subtitles, codecs, audio tracks and even cover images. Also, Matroska looks much less “designed by committee” than MP4 does, and is generally better supported by open source software. Not quite well enough, as we’ll see.

To get from, say, avi containers to mkv is easy (after apt-get install mkvtoolnix):

for i in *.avi; do mkvmerge -o `basename $i .avi`.mkv --language 1:eng --title "`respacefilter $i | cut -d . -f 1`" $i ; done

This only changes the container, it won’t recode anything. It usually works with avi, mp4, mpeg, flv, ogm and more, but not with wmv.

You’ll notice the program respacefilter, which I’ve written to display BiCapitalized filenames as strings containing spaces. And if you’ve got some experience with the unix shell, you’ll also notice the above commandline will fail for files containing spaces. That’s exactly the reason why spaces in filenames are bad.
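For completeness, a space-safe variant of the loop above, using quoting and shell parameter expansion instead of basename and the respacefilter trick (the title then simply keeps whatever form the filename has):

```shell
# Re-container every avi in the directory, safely handling names
# with spaces; skips cleanly if there are no avi files at all.
for i in *.avi; do
  [ -e "$i" ] || continue
  mkvmerge -o "${i%.avi}.mkv" --language 1:eng --title "${i%.avi}" "$i"
done
```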

The above command also sets the “Title” tag to something probably meaningful, and the language of the first audio track to English. You can change the latter later on with
mkvpropedit --edit track:a1 --set language=ger target.mkv

If the title is screwed, you could set it with
mkvpropedit --edit info --set title="This Movie" target.mkv

Of course, if you already do have Matroska files, and their title tags are not set or wrong, you might not want to set all titles by hand. I’ve also written a script called titlemkv to fix this. It can also fix some drawn-out all-caps acronyms. Apart from the mkvtools, this needs mediainfo (install on Debian/Ubuntu with apt-get install mediainfo).

All the above can also be done, one file at a time, with the graphical interface mmg (of course: apt-get install mkvtoolnix-gui).

By now, you should have all your movie files in Matroska containers, and if not (because things like wmv files, or files containing ancient codecs, can’t just be re-containered), there’s HandBrake (as usual, apt-get install handbrake-gtk)

Matroska Metadata

Apart from the title and the languages of audio tracks and subtitles, Matroska files do not contain any metadata directly. Instead, it goes into an xml file, which is muxed into the container. This obviously makes the whole process rather tedious. You don’t want to do it by hand.

Also, it turns out most applications do not read any metadata from the containers AT ALL. mediainfo of course can do it. So can avinfo, surprisingly. vlc can display most of it in a special window. mpv will display the Title as the window title. But the ones really needing metadata, the media center applications, CAN’T. Neither MythTV, nor xbmc. Instead, both of these rely on filenames, and put the metadata into their database, with the added option of using some accompanying file with the movie which gets interpreted as well.

To add insult to injury, given one of these accompanying files with correct data, xbmc will display it, but when trying to fill in the blanks, it will happily try to look it up — by interpreting the filename again, wrongly. At least MediaElch can do this right (and that’s why it gets linked).

So the questions are a) how do we get these “accompanying files” (assuming they’re really needed for getting metadata from the web) and b) how do we get better metadata into them, and c) how do we put this metadata into the files itself.

For this, titlemkv can produce a rudimentary .nfo file for xbmc when given the -n switch. It will contain the title, and the year if it is already set in the mkv. Going from this, MediaElch, or any other non-broken scraper, can now fill in the blanks and produce .nfo files which contain a lot of information, like directors, actors, summaries and so on.

The last piece is my nfo2xml script, which will walk over a directory and produce a mkv-compatible XML file out of every .nfo file it finds. The XML can then be muxed into the mkv container, thus:
for i in *.mkv; do mkvpropedit $i --tags all:`basename $i .mkv`.xml ; done

The Future

I’ll probably update titlemkv to generate complete .nfo files from mkv metadata (or split that functionality into another program), and I also want to look at the question of how to incorporate cover images and such. First, I want all my files to contain useful metadata; and second, as long as this sorry state persists, I want to be able to generate whatever external metadata an application wants out of the incorporated metadata (which has its own merits: I would also be able to rename and sort my whole collection solely according to the metadata in the files themselves).

(Edit 1: I wrote a rather stupid shellscript mkvattachcover to convert and attach cover images. It expects them with the filenames provided by MediaElch.)

(Edit 2: For use with mediainfo --inform=file:///path/to/AVinfo.csv I put up a decent template, AVinfo.csv, which will show Matroska-specific tags. No, I have no idea why they call their templates .csv; they aren’t.)

But crucially, the media center applications and the file managers will need to support metadata incorporated into files, just as one expects with audio files, where this is absolutely the case.

Metadata MUST reside within the same file. I do understand that certain programs do not want to incorporate code to change this metadata, but just about everything accessing these files must be able to READ it, including media players, scrapers and file managers.

(Edit 3: nautilus displays either cover.jpg or small_cover.jpg as icon. But that’s it, apparently it can’t read any other metadata.)

Patents on Bronze Age Technology

Friday, May 10th, 2013

This here is from Apple’s Slide-to-Unlock patent, which is currently being invalidated.
Slide to Unlock Patent
However, the question remains why this could be granted in the first place. Laziness? A case of “it said computer, so I turned off my brain”? Or job-blindness “I couldn’t find any prior art in the patent database”?

Because the amount of prior art is actually staggering. This here is one of the earliest I could casually find:
 Abydos King List. Temple of Seti I, Abydos
Yes, it’s hieroglyphs, and they’re from roughly 1290 B.C. The topmost hieroglyph is a “z” (or hard “s”), and the symbol is that of a door bolt. And since hieroglyphs are rather old, and Seti I by no means one of the early pharaohs, this means there’s most probably much older evidence out there for “slide-to-unlock”.

And I’d wager there’s much more of this crap out there. Chances are very slim that this is an isolated case; this is most probably endemic, inherent to the system.

Your name is “Windows User” and your scientific Paper is called “Microsoft Word – Untitled1”

Saturday, April 20th, 2013

At least that is what I get from the metadata in your publication.

Google finds about 250’000 of these papers. It gets much worse if you only search for documents called “untitled1”. Not just the documents themselves carry this meta-information, but all kinds of conversions, to html and to pdf, as well.

Sometimes, to make the whole thing even more ironic, the publisher has added his own information — but neither the title, nor the author.

Yes, metadata is kind of a pet issue for me, and I’ve even written about How to Enter EPUB Metadata, apart from also having written software to fix metadata in PDF- and epub-files (epub-meta/epub-rename and exif-meta/exif-rename; the latter works for PDF, and the name comes from exiftool, although technically the PDF metadata is XMP).

But still, if your paper is to be worth anything, it should be worth being found, and that also means being provided with accurate meta-information.

Librarians either work with an ISBN, or, if no ISBN can be found (because the work was published before 1969, or because no ISBN was ever registered), they need the following to correctly identify a work:

  • Author
  • Title
  • Publishing Year
  • Publisher

So you should take care that at least the first three of those are correctly filled in. If you’re doing a paper or book in the course of your work or study and publish it on the internet, consider entering the university or company as publisher.

Minecraft: Mob Factory

Tuesday, December 18th, 2012

I noticed some time ago that multiple spawners can be active at the same time, as long as the player is within 16 blocks of each of them. However, if too many mobs of the kind a spawner spawns are within a 9x16x16 area around it, it stops spawning after 6 mobs or so.

So in principle, it must be possible to have a lot of spawners, maybe 8, complete with their delivery and killing systems, within a sphere of 16 blocks around the player, all churning out mobs and items. So that’s where I started:

In green you can see the 16 block sphere around where the player would be standing, in yellow are the 9x16x16 areas where no mobs of the same type should be (and consequently, the area any spawned mobs need to leave as soon as possible). The cyan circle is the ground layout, and of no consequence. The spawners along with their spawn-boxes are in brown and in stone. Those structures made of end-stone are elevators and droppers, to the left is one for skeletons, to the right one for cave spiders.

This made for a rather cramped internal layout, with 7 spawners and all the mobs which needed to be led out, upwards, thrown down, and led to the middle again. Plus the redstone, mostly for lighting. It was a mess: the spider grinder didn’t really work, for blazes and endermen I hadn’t implemented any automatic system, and I didn’t know where to put them for lack of space.

Then I watched Etho Plays Minecraft – Episode 234: Egg Delivery, where he demonstrated that with Minecraft 1.4, items will go up through a solid block if there’s no other space around where they could go. So I redesigned the whole interior. I decided that only blazes would be left to kill for XP, and the other mobs would just get killed as soon as possible, and their items sent up to the central room.

This I did. And I moved the spiders to one side, making space for another spawner, slimes, making it the whole 8 spawners I initially envisioned. Of course, if I hadn’t cared for isolating zombies, creepers and skeletons from each other, it would have been possible to put in more spawners. Probably all of them. So this isn’t as efficient as it could be.

I initially had some problems with the redstone circuits, but I finally realised that something simpler would do the job just as well. Now it’s only two clocks, one for the item elevator and one for the grinder; a T flip-flop, also for the grinder; and a pulser, for sweeping out items.

The two mobs posing the biggest problems were blazes and slimes. Blazes, because they need a light level of 11 or higher in order not to spawn (which I solved with lots of redstone lamps and a smaller spawn area), and the slimes, which would spawn in any light. I now put half their spawn area under water when spawning is turned off, but small slimes still spawn. For the cave spiders, I just turned the above item elevator into a killing machine, killing spiders and sending up items at the same time.

Right now, I’m still not entirely happy with the blaze situation. I would like to have them delivered to the central room, so I can kill blazes while I wait for items, but I’ve not yet found a good solution.

Finally, I couldn’t resist giving the thing a facade, and I decided upon a late 19th century industrial look. Half of it is buried in the ground, which makes the main control room in the middle of the structure easily accessible from ground level:

I call it “The Manufacture”, although of course it isn’t one. But this fits with the 19th century theme, where factories sometimes were still called manufactures, although the production wasn’t really “handmade” any more. And it works day and night ;):

Level and schematic:

Update:

Mob Products

Minecraft 1.5 is out, and it makes item-handling so much easier. So this is the totally revamped mob factory, now called “Mob Products”, featuring lettering on the roof (idea and font by Etho), and using hopper conveyors, dropper elevators and an item sorter.

Also, I went over it with 1.6.2 and the HostileSpawnOverlay mod, and fixed some lighting.

Update 2:

Mob Products Front

I fixed the typo on the roof ;).

No new level but simply the schematic:

The license of these files and my screenshots is the OPL 1.0 or CC-by-sa 3.0.

Minecraft: Medieval/Baroque Town

Sunday, December 2nd, 2012

If you go to Planet Minecraft or mcschematics and look for schematics of towns, you’ll notice something: Just about none of them are towns. They’re villages with walls and maybe a castle. Rarely, there are some that somehow make sense, like Skycrown (nothing historically correct in this one, but plausible).

There are, however, cities that look like cities. This Imperial City is simply unbelievable.

But I didn’t want a city, just a fortified town for the NPCs. And I wanted a town which looked like one, and not like a village: neither totally flat, nor with too-straight roads, nor one with dispersed single-story buildings, the latter being the defining characteristic of a village. Towns need to be cramped, buildings built in rows and blocks right next to each other, with a small footprint but multiple stories high, to maximize real estate.

I decided to go for a rather medieval look in general, with upper stories protruding in front of the buildings, sometimes with arcades in front of the houses, and with firewalls between the houses that rise slightly above the roofs. Most houses feature a stall/shop behind double doors, and another entrance leading to the workshop and the living quarters.

However, the medieval look is not carried through consistently. Think of a medieval town that was modified over the later centuries and arrived in the 18th century. Most buildings are still of older types, and only the most modern structures really have a baroque look: the city walls, which already follow the model of a star-shaped fortress of the late 17th century, including the gate-tower, the town hall with its tower, and the church. The church is actually the uppermost part of a cathedral by someone else, inspired by the Frauenkirche in Dresden. I won’t include it in the schematic (so I can put the town under a free license), but you’ll notice the big round place at the top of the town — just copy-paste in either the cupola of the above cathedral, or something else there. And the lighthouse is actually too modern; lighthouses of this form only turned up later, in the 19th century.

The schematic is a cut-out from my usual map, since the town didn’t lend itself nicely to being placed on flat ground. It’s supposed to be on a hill towards the sea. If you want to place it in its original setting, the world seed is “3327780” (structures, cheats, no bonus chest, type default). You’ll find the place at +850/+600, southeast of the spawn point. There’s a village there (which, in fact, was the base of my town and supplied its population).

Here’s the level and schematic:

The license of these files and my screenshots is the OPL 1.0 (which is about the same as CC-by-sa).

Minecraft: Vauban’s Fortress

Tuesday, September 11th, 2012

Vauban was the foremost military engineer of the late 17th and early 18th century, and he built dozens of fortresses for Louis XIV. Typical of his art is the star-shaped fortress, which allows fire from one bastion to cover all the space in front of a wall or another bastion.

This is my take on such a fortress. It’s loosely based on the fort of Bayonne (high-resolution pictures can be found at Wikimedia Commons: Citadelle de Bayonne), since that one is square rather than pentagonal, and so lent itself to easier implementation in Minecraft. Well, easier, not exactly easy.

It measures 260 by 244 blocks. Now it also becomes clear why I needed compact designs of Minecraft Cannons. These go onto the ramparts in numbers. There should be 47 short guns (my “carronade”) and 40 long guns (my cannon mk1 and mk2) on there…

Within the citadel of Bayonne are some not-very-inspiring barracks-type buildings, which I initially planned to build as well, but then I found an isometric view of the Citadel of Bayonne which made clear that I was off anyway, and my fortress wasn’t really Bayonne. So I thought, what the heck, we’ll go for baroque all the way, with garrets and an onion-domed tower. I got some more inspiration in the form of baroque mansions and town halls, and put a mélange of them on there.

And that’s how the whole thing looks, at dusk:

Well, that’s about it, here’s the level and schematic:

The license of these files and my screenshots is the OPL 1.0 (which is about the same as CC-by-sa); the original plan of Bayonne is of course public domain.

The fortress lives best raised about 6 blocks above level ground on some slight hill; that’s why the level has it raised by that much (it’s just the hill missing under it). Some small details are unfinished, namely the portal on the other side, and some kind of outer gateway fortification (you’ll notice the modern one seen in the isometric drawing is quite different from the one on the original plan). It is about half equipped with furniture, as well as half filled with stuff. Most notably, it’s got several huge powder magazines. The screenshots were taken on my survival map; the downloadable level is flat.

Update: I updated the level and schematic. The fortress’s other entrance is now usable and has portals (automatic ones), a towerlet over one of the portals, and some gardening (hedges, fountains and lantern posts) done.