• Tutorial: stream GTK applications and use them in your browser (with GTK+ and Broadway)

    A few days ago I (finally) received my first C.H.I.P. This is the first $9 microcomputer, with a 1 GHz R8 ARM CPU, 512 MB of RAM, 4 GB of on-board storage, plus Wireless B/G/N and Bluetooth 4.0.

    All in all, for that price I have to admit it’s a good, all-purpose machine that you can easily use for your experiments and to explore new possibilities in computing.

    So the first thing I wanted to test was how well this little machine could stream GTK applications over the network using the Broadway backend available in GTK+.

    After flashing Debian Jessie onto it (the image comes without a window manager), I had to compile GTK+ with the Broadway backend enabled (this is now standard in most i386 and amd64 distributions, but not in ARM ones), following the compilation instructions on the GTK+ page.

    So, after logging into your C.H.I.P., you need to install the dependencies – some of them are already packaged in the right version, while you will have to compile others:

    sudo apt-get install pkg-config make autoconf2.13 libtool zlib1g-dev libffi-dev gettext libfam-dev libpackagekit-glib2-dev libgtk2.0-dev python2.7-dev gtk-doc-tools libglib2.0-dev gir1.2-glib-2.0 libtiff5-dev flex bison python-dev libcairo2-dev libepoxy-dev libatk-bridge2.0-dev vim libgirepository1.0-dev unzip

    Then you will need to build and install GLib:

    cd ~

    wget http://ftp.gnome.org/pub/gnome/sources/glib/2.46/glib-2.46.2.tar.xz

    tar xvfJ glib-2.46.2.tar.xz

    cd glib-2.46.2

    ./configure

    Now you need to find the include paths to pass to make as CFLAGS:

    pkg-config --cflags glib-2.0

    Use the flags it prints in the make invocation, as in the example below:

    make CFLAGS='-I/usr/include/glib-2.0 -I/usr/lib/arm-linux-gnueabihf/glib-2.0/include'

    make install

    export LD_LIBRARY_PATH="/usr/local/lib"

    export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig"

    Now it’s time to compile pango, gobject-introspection, gdk-pixbuf, atk and, finally, GTK+. The recipe is the same for each: download, unpack, configure, make, make install.

    cd ~

    wget http://ftp.gnome.org/pub/gnome/sources/pango/1.38/pango-1.38.1.tar.xz

    tar xvfJ pango-1.38.1.tar.xz

    cd pango-1.38.1

    ./configure

    make

    make install

    cd ~

    wget http://ftp.gnome.org/pub/gnome/sources/gobject-introspection/1.46/gobject-introspection-1.46.0.tar.xz

    tar xvfJ gobject-introspection-1.46.0.tar.xz

    cd gobject-introspection-1.46.0

    ./configure

    make

    make install

    cd ~

    wget http://ftp.gnome.org/pub/gnome/sources/gdk-pixbuf/2.32/gdk-pixbuf-2.32.3.tar.xz

    tar xvfJ gdk-pixbuf-2.32.3.tar.xz

    cd gdk-pixbuf-2.32.3

    ./configure

    make

    make install

    cd ~

    wget http://ftp.gnome.org/pub/gnome/sources/atk/2.18/atk-2.18.0.tar.xz

    tar xvfJ atk-2.18.0.tar.xz

    cd atk-2.18.0

    ./configure

    make

    make install

    cd ~

    wget http://ftp.gnome.org/pub/gnome/sources/gtk+/3.18/gtk+-3.18.6.tar.xz

    tar xvfJ gtk+-3.18.6.tar.xz

    cd gtk+-3.18.6

    ./autogen.sh --enable-broadway-backend --enable-x11-backend

    ./configure --enable-broadway-backend --enable-x11-backend

    make

    make install

    The time has come to test the result of our work:

    First of all, start the broadwayd daemon, choosing the port and the display to use:

    broadwayd -p 8080 :2 &

    export GDK_BACKEND=broadway

    export BROADWAY_DISPLAY=:2

    Finally, install a GTK application like shotwell, gedit or galculator:

    sudo apt-get install gedit galculator shotwell

    and launch one of them…
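    For a quick test, the daemon start and the two exports above can be collapsed into one short sequence (a sketch, assuming gedit is installed and the freshly built broadwayd is on your PATH):

```shell
# Start the Broadway HTML5 display server on port 8080, display :2.
broadwayd -p 8080 :2 &

# Environment variables assigned inline apply only to this one process,
# so other GTK applications on the system are unaffected.
GDK_BACKEND=broadway BROADWAY_DISPLAY=:2 gedit &
```

    Assigning the variables inline this way is handy when you want only a single application to render via Broadway.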


    From another machine, you can now open your browser and point it to http://<ip-of-your-C.H.I.P.>:8080

    and use your application, running remotely, from your browser.

    GEdit working in Chromium

  • Tutorial: How to mount raw disk images (.img) on Linux

    If you have some .img files – disk images from devices like floppies, CDs, DVDs, SD cards, etc. – you will realize that you cannot simply mount them in Linux, because they contain one or more partitions whose filesystems have to be mounted individually.

    On Linux you use the mount command, as for any physical device, but you need the correct syntax, which is based on the information about the partition(s) available in the image.

    The first step is to read the partitions’ start sectors using fdisk. In the terminal, type:

    sudo fdisk -l imgfile.img

    You will see an output similar to the one below:

    Device         Boot   Start    End         Blocks       Id  System
    imgfile.img1   *      63       266544      722233       c   W95 FAT32 (LBA)
    imgfile.img2          25679    25367890    245667890+   83  Linux

    As you can see there are two partitions: the first is FAT32 and the second holds a Linux ext filesystem. To mount the first partition we have to tell Linux to start reading at sector 63. The standard sector size is 512 bytes, although other sizes such as 128 or 1024 exist. Assuming the source of the image doesn’t specify a different sector size, we can type in the terminal:

    sudo mount -t vfat -o loop,offset=$((63 * 512)) imgfile.img /mnt/disk
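    The offset expression is plain sector arithmetic – start sector times sector size – and the shell evaluates it before mount ever sees it:

```shell
# mount's offset= option expects a byte count; $(( )) lets the shell
# do the sector-to-byte conversion in place.
echo $((63 * 512))       # first partition:  prints 32256
echo $((25679 * 512))    # second partition: prints 13147648
```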

    To mount the second partition, as you can imagine:

    sudo mount -t ext4 -o loop,offset=$((25679 * 512)) imgfile.img /mnt/disk1

    It’s important to copy the “Start” sector number correctly; otherwise you’ll get an error message like:

    mount: wrong fs type, bad option, bad superblock on /dev/loop0,
    missing codepage or helper program, or other error
    In some cases useful info is found in syslog – try
    dmesg | tail or so

    One last thing: raw CD and DVD images often use a sector size of 2352 instead of 512. If you are opening such an image, use that value in place of 512.
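    The arithmetic is the same – only the multiplier changes. A sketch with hypothetical numbers (a session starting at sector 16; the image name and mount point are placeholders):

```shell
# Byte offset for a (hypothetical) session starting at sector 16
# of an image with 2352-byte sectors:
echo $((16 * 2352))      # prints 37632
# which would then be used as:
#   sudo mount -t iso9660 -o loop,offset=$((16 * 2352)) cdimage.img /mnt/cd
```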

  • Tutorial: How to use dcfldd instead of dd

    Today I want to introduce an excellent command that works very much like dd but is just much better…

    dcfldd is an enhanced version of dd developed by the U.S. Department of Defense Computer Forensics Lab.

    Department of Defense Cyber Crime Center

    Features include:

    • Hashing on the fly: dcfldd can hash the input data as it is being transferred, helping to ensure data integrity, and supports multiple hashes at once.
    • Status output: a progress indicator of how much data has already been transferred.
    • Flexible disk wipes: dcfldd can wipe disks quickly, with a known pattern if desired.
    • Image/wipe verification: dcfldd can verify that the image is identical to the original drive, bit for bit.
    • Split output: dcfldd can split its output into multiple files with more configurability than the split command.
    • Piped output and logs: dcfldd can send all its log data and output to commands as well as files natively.

    How to install in Ubuntu:

    sudo apt-get install dcfldd

    Here you can see a small summary of the most common options:

    if = input file (device or file you want to read)
    of = output file (device or file you want to copy the data to)
    hash = md5, sha1, sha256, sha384 or sha512 (hash type)
    hashwindow = size (in bytes) of each chunk to hash, i.e. how often a hash value is emitted
    <hash>log = file that will contain the hash log for that hash type (e.g. sha1log=sha1.log)
    hashconv = AFTER or BEFORE, depending on whether you want to compute the hash after or before the conversion
    bs = block size (number of bytes to read at once)
    noerror (ignore read errors and continue) and sync (pad short blocks) are the two most common conv values here
    split = maximum size of each output file (breaks the image into multiple files)
    splitformat = the file-extension format for the split operation
    conv = convert the file as per the comma-separated keyword list (see the following list):
    ascii=from EBCDIC to ASCII
    ebcdic=from ASCII to EBCDIC
    ibm=from ASCII to alternated EBCDIC
    block=pad newline-terminated records with spaces to cbs-size
    unblock=replace trailing spaces in cbs-size records with newline
    lcase=change upper case to lower case
    notrunc=do not truncate the output file
    ucase=change lower case to upper case
    swab=swap every pair of input bytes
    noerror=continue after read errors
    sync=pad every input block with NULs to ibs-size; when used with block or unblock, pad with spaces rather than NULs


    dcfldd if=/dev/source hash=md5,sha512 hashwindow=1G md5log=md5.txt sha512log=sha512.txt \
    hashconv=after bs=512 conv=noerror,sync split=1G splitformat=aa of=image.dd

    This command reads the source drive and writes the first gigabyte to a file called image.dd.aa, the next gigabyte to image.dd.ab, and so on. It also calculates the MD5 and SHA-512 hash of each gigabyte read: the MD5 hashes are stored in a file called md5.txt and the SHA-512 hashes in sha512.txt. The block size for transferring has been set to 512 bytes, and in the event of read errors, dcfldd will write zeros.
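    The split segments are ordinary files that concatenate back into the full image, so the acquisition can later be reassembled and re-hashed with standard tools. A small simulation, using two tiny stand-in segments instead of real 1 GB ones:

```shell
# Two stand-in segments (real ones would be image.dd.aa and image.dd.ab,
# 1 GB each, produced by the dcfldd command above).
printf 'AAAA' > image.dd.aa
printf 'BBBB' > image.dd.ab

# Reassemble in lexical order and re-hash the complete image:
cat image.dd.aa image.dd.ab > image.dd
md5sum image.dd          # compare against the hashes logged at acquisition time
wc -c < image.dd         # prints 8
```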
