2010/11/28

Triggering Hudson parameterized builds which include a file parameter using curl

My final goal is to run mvn release:prepare on my local machine to have greater control and trigger the actual mvn release:perform on Hudson, which will do the deployment in a controlled environment and archive the build log.

To achieve this, I want to upload the generated release.properties to the Hudson instance. This should be possible using the Parameterized Trigger Plugin. The tricky part is that the solution proposed on the plugin's wiki page using buildWithParameters did not work for me; I always got HTTP 400 or HTTP 500 responses. After some attempts and an analysis of the traffic with Charles I came up with two solutions, one using token authentication, the other using basic authentication. The important part seems to be to include the parameters json and Submit as well. Note that the file parameter is numbered, i.e. it is called file0.

Token authentication

curl -i -Fname=release.properties -Ffile0=@FILE_TO_UPLOAD \
-Fjson='{"parameter": {"name": "release.properties", "file": "file0"}}' \
-FSubmit=Build \
'http://HUDSON/hudson/job/JOBNAME/build?token=TOKEN'

The advantage of this approach is that no further user interaction is required; however, you do not know who triggered the build.

Basic HTTP authentication

curl -i -uUSERNAME -Fname=release.properties -Ffile0=@FILE_TO_UPLOAD \
-Fjson='{"parameter": {"name": "release.properties", "file": "file0"}}' \
-FSubmit=Build  'http://HUDSON/hudson/job/JOBNAME/build'

The advantage of this approach is that you know who triggered the build.
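The whole local step can be wrapped in a small script. This is only a sketch: HUDSON_URL, JOBNAME and TOKEN are placeholders, the mvn call is commented out, and the curl command is printed instead of executed so you can review it first.

```shell
#!/bin/sh
# Sketch: prepare the release locally, then trigger the remote perform
# step by uploading release.properties. All URLs and names below are
# placeholders; the command is echoed, not run.
HUDSON_URL="${HUDSON_URL:-http://HUDSON/hudson}"
JOBNAME="${JOBNAME:-JOBNAME}"
TOKEN="${TOKEN:-TOKEN}"

# mvn release:prepare   # run this locally first

trigger_release_build() {
  # file parameters are numbered by Hudson, hence file0
  printf '%s\n' "curl -i -Fname=release.properties -Ffile0=@release.properties -Fjson='{\"parameter\": {\"name\": \"release.properties\", \"file\": \"file0\"}}' -FSubmit=Build '$HUDSON_URL/job/$JOBNAME/build?token=$TOKEN'"
}

trigger_release_build
```

Pipe the output to sh (or drop the printf wrapper) once the command looks right.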

2010/11/12

Using iptables on Android to redirect HTTP connections to a running Charles proxy instance

During development it is often desirable to inspect the HTTP requests your applications make. As reported in Android Issue 1273 there is no easy way to set an HTTP proxy when using WiFi. In this article I describe how to use Charles as a web proxy, at least for unencrypted connections.

Unfortunately, you have to root your phone, as otherwise you are not allowed to call iptables. Rooting is easy: visit unrevoked and follow the instructions. If you want to install a custom ROM with Froyo, just follow the instructions on Wildpuzzle (or any other) ROM for HTC Wildfire.

Then install Charles, see my article on Using BaseX and Charles. Start it up and configure Charles to be a transparent HTTP proxy in Proxy/Proxy Settings....

I assume you installed the Android SDK (for Mac OS X use Homebrew, see my article on starting an Android emulator via LaunchAgent for specifics).

On your device allow USB debugging (Settings/Applications/Development/USB Debugging), then connect your rooted device via USB. Run adb shell; you should be greeted with an sh-3.2 prompt. In this example 192.168.51.9 is the address of the computer running Charles and 8888 is the port.

sh-3.2# iptables -t nat -A OUTPUT -p tcp -o eth0 --dport 80 -j DNAT --to 192.168.51.9:8888
FIX ME! implement getprotobyname() bionic/libc/bionic/stubs.c:378

You may ignore the error.

sh-3.2# iptables -t nat -L -nvx
Chain PREROUTING (policy ACCEPT 19 packets, 4832 bytes)
    pkts      bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 1068 packets, 65421 bytes)
    pkts      bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 1050 packets, 63721 bytes)
    pkts      bytes target     prot opt in     out     source               destination         
       8      472 DNAT       tcp  --  *      eth0    0.0.0.0/0            0.0.0.0/0           tcp dpt:80 to:192.168.51.9:8888 

Hint: On Mac OS X you have to allow incoming connections to your computer, e.g. by going to System Preferences/Security and disabling the firewall. Now you should see all your unencrypted HTTP connections going through Charles.

To disable using Charles as a proxy enter:

sh-3.2# iptables -t nat -F OUTPUT

This will flush the OUTPUT chain, and all HTTP connections will go directly to their destination hosts again.
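The enable and disable steps can be wrapped in two small shell functions. This sketch only prints the iptables commands (remove the echo and run as root on the device); host and port default to the values from this example.

```shell
#!/bin/sh
# Convenience wrapper around the two iptables invocations above.
# The commands are printed instead of executed so the sketch is safe
# to run anywhere; CHARLES_HOST/CHARLES_PORT are placeholders.
CHARLES_HOST="${CHARLES_HOST:-192.168.51.9}"
CHARLES_PORT="${CHARLES_PORT:-8888}"

proxy_on() {
  # redirect outgoing HTTP to the Charles instance
  echo iptables -t nat -A OUTPUT -p tcp -o eth0 --dport 80 \
    -j DNAT --to "$CHARLES_HOST:$CHARLES_PORT"
}

proxy_off() {
  # flush the OUTPUT chain again
  echo iptables -t nat -F OUTPUT
}

proxy_on
proxy_off
```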

Unfortunately this approach will not work for encrypted connections right now; I am still investigating this.

2010/10/15

Using BaseX to grep through Charles output

BaseX is a fantastic tool to grep through large XML files by creating indices for text, attributes and path summaries. I use it to analyze data generated by Charles, an HTTP proxy / HTTP monitor / reverse proxy. Both tools are written in Java, so you should have no problems running them. While the latter is not OSS, it comes at a reasonable price and may be used as a trial version for 30 days, after which you get a nagging dialog.

Charles may be used as a simple tool to run stress tests. Just choose it as a proxy, run your usual use cases and export the data to XML. Two of the power features Charles offers are man-in-the-middle for SSL connections (by importing the Charles Root CA certificate) and modifying your requests on the fly to use test systems of new software instead of the live ones. Afterwards you may check the output by searching for your expected results with BaseX using XQuery or XPath. An example:

  • Start Firefox with /Applications/Firefox.app/Contents/MacOS/firefox-bin -profileManager and create a new profile called Charles.
  • Download and install Charles' Firefox extension by visiting the download site and restart Firefox after installation.
  • In the Tools menu of Firefox Charles offers to install the CA certificate.
  • Make sure you have Charles running and choose to proxy Firefox in its Proxy menu.
  • Enable Charles in the Tools menu of Firefox; now you should see requests coming through Charles.
  • Search for hgkit in Google.
  • Drill down in Charles tree view and find the http://www.google.de/search?client=firefox-a&rls=org.mozilla%3Aen-US%3Aofficial&channel=s&hl=de&source=hp&q=hgkit&meta=&btnG=Google-Suche request.
  • From the context menu of this request choose Repeat advanced and enter 100 iterations with 5 concurrent requests making sure to use a new session.
  • Export the new session in Charles as google-hgkit.xml.
  • Now start BaseX and create a new database referencing google-hgkit.xml. If you encounter an error Invalid byte 1 of 1-byte UTF-8 sequence, make sure to use the built-in parser in the Parsing tab. Also make sure you have enabled Options/Realtime execution.
  • Analyze your data:
    • A search for /charles-session/transaction should result in 100 hits.
    • A search for /charles-session/transaction/response[@status="200"] should result in 100 hits.
    • As the HTML returned by the search is escaped in the body, you need to use XML escaping in your search through the body.
    • A search for /charles-session/transaction/response[@status="200"]/body[contains(text(), "this_surely_will_not_show_up_will_it_dsddada")] should result in 0 hits.
    • A search for /charles-session/transaction/response[@status="200"]/body[contains(text(), "&lt;a href=&quot;http://hgkit.berlios.de/")] should result in 100 hits.
    • The XQuery for $y in (for $x in /charles-session/transaction where $x/response/@status="200" return ( $x/@endTimeMillis - $x/@startTimeMillis)) order by $y descending return $y will return the times for successful requests in milliseconds in descending order.
    • Return all requests taking more than 300 milliseconds: for $y in (for $x in /charles-session/transaction where $x/response/@status="200" return ( $x/@endTimeMillis - $x/@startTimeMillis)) where $y > 300 order by $y descending return $y.
    • Return the count for the above requests:
      let $times := (for $x in /charles-session/transaction 
        where $x/response/@status="200" 
        return ($x/@endTimeMillis - $x/@startTimeMillis))
      let $slowQueries := for $y in ($times) where $y > 300 return $y
      return count($slowQueries)
      

You could trigger a second search with a different search term and verify that the search results are not mixed up by querying, e.g.
/charles-session/transaction[contains(@query, "q=hgkit")]/response[@status="200"]/body[contains(text(), "&lt;a href=&quot;http://hgkit.berlios.de/")]. You may select more than one request in Charles for repetition. For further instructions on XPath I recommend the W3Schools tutorial.
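Before creating the database you can also sanity-check the export from the shell. This sketch assumes each transaction and response start tag begins on its own line in the export and that status is the first attribute of response; verify both against your own export.

```shell
#!/bin/sh
# Quick sanity checks on a Charles XML session export, independent of
# BaseX. Assumes one <transaction ...> / <response status="..." ...>
# start tag per line; attribute order may differ in other versions.

count_transactions() {
  grep -c '<transaction ' "$1"
}

count_ok_responses() {
  grep -c '<response status="200"' "$1"
}

# Both counts should be 100 for the repeated search session.
if [ -f google-hgkit.xml ]; then
  count_transactions google-hgkit.xml
  count_ok_responses google-hgkit.xml
fi
```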

2010/09/27

Starting an Android emulator automatically on Mac OS X after login via LaunchAgent

Homebrew offers a simple means to install additional software packages on your Mac OS X computer. After the initial installation of brew, execute as an admin user:

brew install android-sdk # will install the newest SDK starter package
android update sdk # this will open the UI, now install all platforms
chgrp -R staff /usr/local/Cellar/android-sdk/r7 # otherwise ANDROID_HOME will be owned by the wheel group and you cannot start anything as a non-admin user.

To use tools like the emulator, add ANDROID_HOME and ANDROID_SDK_ROOT to your $HOME/.profile or $HOME/.bash_profile (if the latter exists, use it):

ANDROID_SDK_ROOT=/usr/local/Cellar/android-sdk/r7
ANDROID_HOME=$ANDROID_SDK_ROOT
export ANDROID_SDK_ROOT ANDROID_HOME

Create an emulator called Wildfire using the android command. If you want the emulator to be started automatically after you log in, put the following into $HOME/Library/LaunchAgents/emulator-wildfire.plist:
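A minimal plist for this, modeled on the JNLP slave plist further down in this archive (the label, the SDK path from above and the AVD name Wildfire are assumptions, adapt them to your setup):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>Label</key>
        <string>emulator-wildfire</string>
        <key>ProgramArguments</key>
        <array>
                <string>/usr/local/Cellar/android-sdk/r7/tools/emulator</string>
                <string>-avd</string>
                <string>Wildfire</string>
        </array>
        <key>RunAtLoad</key>
        <true/>
</dict>
</plist>
```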

After you have saved the file, execute launchctl load $HOME/Library/LaunchAgents/emulator-wildfire.plist. From now on the emulator starts whenever you (or your CI user) log in.

2010/09/05

A simple way to get a git hash as version info into Android applications using Maven

I recently decided to do some Android programming. Enter Mittagstisch KA. I really like to know which sources applications are built from. Using Maven and its Antrun plugin this is rather simple:
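The antrun execution boils down to a shell step like the following sketch (the exact paths and the fallback value are assumptions; the resource name info_githash matches the Activity snippet below):

```shell
#!/bin/sh
# Write the current git hash into an Android string resource. A sketch
# of what the antrun execution would run; falls back to "unknown"
# outside a git checkout so the build still works from an export.
hash=$(git rev-parse --short HEAD 2>/dev/null || echo unknown)
mkdir -p res/values
cat > res/values/githash.xml << EOF
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="info_githash">$hash</string>
</resources>
EOF
```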

This will create a new string resource file, which is automatically picked up by Android's resource compiler and might be read in your application by an Activity like this:


final String gitHash = getResources().getString(R.string.info_githash);

Do not forget to add res/values/githash.xml to your .gitignore file, otherwise you will be committing endlessly :-).

2010/07/06

Really using launchctl to restart a Hudson Mac OS X build slave connected via JNLP automatically

In my last posting I wrote that commands put into $HOME/.launchd.conf would be launched automatically after a login, as stated by the man page for launchctl. However, this is false! After a reboot or relogin the commands will not be picked up. Quoting man 5 launchd.conf:
$HOME/.launchd.conf  Your launchd configuration file (currently unsupported).
A lesson learned: always try and test what you write about, sorry :-(. However, the following .plist file put into
$HOME/Library/LaunchAgents/org.hudson-ci.jnlpslave.plist really starts the slave:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>KeepAlive</key>
        <true/>
        <key>Label</key>
        <string>org.hudson-ci.jnlpslave</string>
        <key>ProgramArguments</key>
        <array>
                <string>/usr/bin/java</string>
                <string>-jar</string>
                <string>/home/hudson/bin/slave.jar</string>
                <string>-jnlpUrl</string>
                <string>http://SERVER/hudson/computer/NODE/slave-agent.jnlp</string>
        </array>
        <key>RunAtLoad</key>
        <true/>
</dict>
</plist>
Of course you have to adapt the path to your slave.jar as well as the URL to your Hudson master.

2010/07/02

Using launchctl to restart a Hudson Mac OS X build slave connected via JNLP automatically

$HOME/.launchd.conf does not work, see my working followup on this!

In my company's build infrastructure most of the slaves are located in the same data centre as the master, so we usually just use ssh to launch slave.jar. As we did not want to buy an Xserve, and our operations team would not like to host such an aberration from the usual (Linux) just to build the handful of jobs which are Mac OS X only, we bought a Mac mini. It is part of the workstation LAN, where hudson logs in automatically, as we need the GUI anyway for Selenium tests.

As we have very strict firewall rules, access from the server LAN into the workstation LAN is forbidden. That's why we use JNLP to start the slave. So we've usually restarted the slave manually after the connection broke down.

Enter launchctl. Instead of fiddling around with a plist file I just used launchctl submit to achieve the same. From the command line enter the following:

launchctl submit -l hudson-slave -- /usr/bin/java -jar /Users/hudson/slave.jar -jnlpUrl http://SERVER:PORT/hudson/computer/NODE/slave-agent.jnlp

This will start the slave and restart it automatically should the connection ever break down. You may watch the log statements emitted by the slave by executing
open /Applications/Utilities/Console.app. To run this command every time your Mac OS X machine reboots, create a .launchd.conf in the hudson user's HOME like this:


cat > /Users/hudson/.launchd.conf << EOF
submit -l hudson-slave -- /usr/bin/java -jar /Users/hudson/slave.jar -jnlpUrl http://SERVER:PORT/hudson/computer/NODE/slave-agent.jnlp
EOF

You must not use the javaws way (the -wait option did not work for me), as the parent process will exit after it has launched the JNLP connection and launchd will immediately try to restart it a few times.

2010/06/27

Cross browser CSS and selectors - improving Hudson's viewList

After visiting the JBoss Hudson instance with Firefox I really liked the way the tabs were shown in the viewList. However, revisiting the same page with Chrome was a disappointment: neither was the active view emphasized, nor were the inactive views flowing like they did in Firefox.

After some trials I found that the attribute selectors in the CSS were not matching. Digging into the element view showed that Chrome did not keep the whitespace in the style attribute, so Firefox matched tr[style='height: 3px;'] while Chrome needed tr[style='height:3px;']. After duplicating the selectors and adjusting some attributes for Chrome I got at least the active view rendered the right way, see jboss-style.css.

2010/06/18

Storing your OpenOffice, Xmind, ... zippy documents more efficiently in a SCM

I have really liked having my source code and documents in an SCM since I first discovered CVS about 14 years back, and I introduced it in two companies thereafter. One of them had tried to use VSS, which was not really usable at the time: you had to lock files for editing, which meant calling for the VSS admin whenever a colleague was not available, and working on the same document in parallel was not possible at all. In the other company, developers had only been using timestamped ZIP files before, which made team work really hard. In my current company I (maybe) made a mistake by pushing the switch from CVS to SVN about five years ago.
Back then I took a look at one of the first DVCS systems (arch) but found it too confusing, at least for me, YMMV. About three years ago I discovered Mercurial and have really liked it since, especially as I really like Python. I tried Bazaar as well, because it promised better integration with Subversion, but it used several different, incompatible repository formats, so I had problems even checking out a remote repository more than once, and the speed was not convincing either. Nowadays I sometimes use Git, which I like as well; I am especially impressed by the simple underlying concept of storing things. However, I still feel more comfortable with Mercurial right now and use Bitbucket a lot.
After having used a DVCS you feel almost crippled by SVN's bad merging support, and the idea of having no distinction between branches and tags does not seem so clever anymore. We have had some hard times using standard SVN tools after a decision to put release tags in a directory called releases, and we are sometimes still struggling to find a common point of view on the correct position of trunk and on what to store beneath release tags in repositories used by more than one project, so that they are unambiguous for our tool chain and understandable for humans.
Well, back to the topic: nowadays a lot of software uses ZIP containers to store its information, which will bloat your SCM, because every new ZIP is very different from its ancestor even if you only added a single new word: the compression and a preview picture make the new version very different from the old one. So I wrote a little Python script which uncompresses the archive, deletes the included preview and puts the remaining files back into an uncompressed ZIP using the stored method.

Triggering Hudson builds with Mercurial hooks - a variation

Ashlux writes about triggering Hudson builds with Mercurial hooks on his blog. The basic hook described is:

[hooks]
changegroup.hudson = curl http://hudson_url/job/project_name/build?delay=0sec

I use this technique as well a lot with two refinements:

  1. I use polling instead of build, so the URL is http://hudson_url/job/project_name/polling. This will poll your Mercurial repository, and only if something really changed will the build be triggered.
  2. Instead of setting up authentication I always use the TOKEN approach described in the Remote access API, so the URL becomes http://hudson_url/job/project_name/polling?token=TOKEN.
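Combining both refinements, the hook becomes (hudson_url, project_name and TOKEN are placeholders):

```ini
[hooks]
changegroup.hudson = curl http://hudson_url/job/project_name/polling?token=TOKEN
```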

Using tokens you do not need to submit any authentication information. Bitbucket offers a POST service which you may use instead of the aforementioned hook; GitHub offers a similar service.

2010/05/29

Fun with Javascript for Hudson

Right now I do not have a lot of experience with Javascript, but after having seen some funny stuff done to our internal Bugzilla with a userscript (templates for bug reports patched into the comment field), I decided to give it a try. Sometimes jobs on our internal build servers (Hudson) will break overnight, e.g. because of full hard drives. So we created a special view with all failed builds and a Hudson extension to build all jobs in a view.

Now I thought this could easily be accomplished with Javascript as well, so I read a little bit about YUI2, which is included in Hudson already, and came up with my first Javascript extension, which I host on Bitbucket.
A few notes on the source code:

  • I just save the script in the userContent folder of Hudson and source it in the description of the view like this:
    <script src="/hudson/userContent/hudson_extensions/he-all.js" type="text/javascript"></script>.
  • First, I attach the initialization to the point in time when the whole page is rendered, with the help of YAHOO.util.Event.onDOMReady; on my first attempts I only collected the links already rendered up to the point where the script was included.
  • If any job build links are found (isBuildHref), I render a Build All link.
  • When hitting the link, I just iterate over the links (buildAll) and send an asyncRequest discarding the result by not specifying a callback function at all to trigger the actual builds.
Now I already can think of other useful stuff to do with this :-).

2010/05/22

Command-Tab - Blog Archive - How to Test RAM Under Mac OS X

After upgrading my MacBook from 2GB to 4GB I ran Command-Tab - Blog Archive - How to Test RAM Under Mac OS X five times, as the first two memory modules I got showed artifacts on the screen. Now everything is running happily :-).

2010/05/08

Upgrading a 1&1 Dynamic Cloud Server from Ubuntu 8.04 LTS to 10.04 LTS

My first try following the instructions for Lucid was not successful; after shutdown -r now the Dynamic Cloud Server did not start up again. It is a pity that there is neither a serial console nor a rescue or repair mode, so I gave it another try...

After loads of retries (new initialisation, waiting and another execution of do-release-upgrade --devel-release) I found the solution in the end:

  • While do-release-upgrade --devel-release updated dhclient, I kept the old configuration with the static route, see dhclient: classless static route, bug? as well.
  • As I had no second chance to boot without security, I removed apparmor (apt-get remove apparmor), maybe I will install this again after some research.
  • Most important: new Linux kernels include a new IDE driver, which addresses former hda devices as sda, so I changed three things after the package update but before the reboot:
    • In /boot/grub/menu.lst I replaced # kopt=root=/dev/hda1 ro console=tty0 console=ttyS0,57600 with # kopt=root=/dev/sda1 ro console=tty0 console=ttyS0,57600.
    • After that I called update-grub to update the configuration of grub.
    • In /etc/fstab I replaced all occurrences of hda with sda.
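The fstab step can be scripted with sed. A sketch that keeps a backup copy; run it as root on /etc/fstab, and be aware it blindly replaces every occurrence of hda:

```shell
#!/bin/sh
# Switch hda device names to sda in an fstab-style file, keeping a
# .bak copy. Do this only after the grub change described above.
migrate_fstab() {
  cp "$1" "$1.bak"
  sed -i 's/hda/sda/g' "$1"
}
```

Call it as migrate_fstab /etc/fstab.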

Now the server is up and running again :-)

What's this all about?

After having had a German blog for some time, I have now decided to allow myself a second blog, where I will post about technical matters.