Archive for category Programming

Inspiration projects for next-generation Torrus

It’s not yet clear when I can start working on a new-generation Torrus, but here are some nice software projects which could inspire the new design, or even become part of it. I haven’t looked into them in depth yet, though.

  • Bosun is a distributed monitoring system produced by Stack Exchange. It uses distributed collector agents which write data into OpenTSDB. Bosun and its collector are written in Go, and OpenTSDB is written in Java.
  • InfluxDB is a time-series database written in Go.

And yes, the new project will most probably have its core in Go. But the SNMP discovery engine will likely remain in Perl because of the long list of supported vendors.


Installing Go 1.3 in Debian Wheezy

The original script is found here: http://www.snip2code.com/Snippet/79027/How-to-install-Go-1-3-in-debian-wheezy

The original script is a bit dated; 1.3-3 is now the latest version:


## File: go1.3-install-deb.sh

# install the build tools and the build dependencies of the Go packages
apt-get install devscripts build-essential
apt-get build-dep golang-go

# fetch the Debian source package for golang 1.3-3
wget http://ftp.de.debian.org/debian/pool/main/g/golang/golang_1.3-3.dsc
wget http://ftp.de.debian.org/debian/pool/main/g/golang/golang_1.3.orig.tar.gz
wget http://ftp.de.debian.org/debian/pool/main/g/golang/golang_1.3-3.debian.tar.xz

# unpack the source and build unsigned binary packages
dpkg-source -x golang_1.3-3.dsc
cd golang-1.3/
debuild -us -uc
cd ..

# install the resulting packages
dpkg -i \
  golang-go_1.3-3_amd64.deb \
  golang-src_1.3-3_amd64.deb \
  golang-go-linux-amd64_1.3-3_amd64.deb \
  vim-syntax-go_1.3-3_all.deb

echo "Finished"
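
If the build went through, the freshly installed compiler should identify itself accordingly (expected output on an amd64 machine):

  $ go version
  go version go1.3 linux/amd64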


Reusing HTTP connections in client-server applications

I’m working on a client-server application which uses HTTP as a transport protocol for API requests, and sometimes there’s a need to execute a few hundred requests in a row, for example during data import or synchronization.

With default Apache HTTP server settings and default LWP::UserAgent options, every new request results in a new HTTP session, and each time a DNS query is sent out. So a synchronization process with a thousand objects floods the DNS service with identical requests for the HTTP server name. This causes delays, and some public DNS servers apply rate limits which lead to DNS lookup failures (I had this with a domain hosted at GoDaddy name servers).

The HTTP/1.1 protocol supports persistent connections, but their reuse is not enabled by default in Apache or in the client.

In the Apache HTTP server, the following options need to be configured:

  KeepAlive On
  MaxKeepAliveRequests 500

In the Perl client program, LWP::UserAgent needs the keep-alive option:

  my $ua = LWP::UserAgent->new(keep_alive => 1);

With these modifications, a DNS query is only sent once per 500 API requests, and the HTTP connection is reused between requests, which saves CPU time on the server. This speeds up the whole process significantly and also prevents the DNS failures caused by rate limiting.
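
As a minimal sketch (the endpoint URL and the number of objects are hypothetical), a bulk synchronization loop then creates the user agent once and reuses it for every request:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use LWP::UserAgent;

  # keep_alive => 1 attaches a connection cache, so the TCP session
  # (and the DNS lookup behind it) is reused across requests
  my $ua = LWP::UserAgent->new(keep_alive => 1);

  # hypothetical API endpoint
  my $base_url = 'http://api.example.com/objects';

  foreach my $id (1 .. 1000) {
      my $response = $ua->get($base_url . '/' . $id);
      warn 'request ' . $id . ' failed: ' . $response->status_line . "\n"
          unless $response->is_success;
  }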


voxserv.ch and Twitter Bootstrap site templates

Here’s a new website where I promote VoIP integration services on the Swiss market: http://www.voxserv.ch/

The website is built with Twitter Bootstrap, and here are the Template Toolkit templates which separate the Bootstrap HTML from the text content: https://github.com/ssinyagin/voxserv.ch/tree/master/builder


Moving from Subversion to Git

Many teams keep their internal scripts in private SVN repositories, and it works quite well in most cases. Here’s a short overview of why you may want to move to Git:

  • Git is distributed. You have the whole history of the project in your local directory, and your work is independent of the central server’s availability.
  • Branching and merging are in the nature of Git: whatever new activity you start, you can spin off a new branch and do your commits without disturbing the production code. After testing, you merge your changes into the master branch easily.
  • Bigger choice of transport: Git lets you communicate via SSH, HTTP, its own Git protocol, or by simply archiving and copying your repository if there’s no direct connection to the central server. With SSH transport, you can easily work around strict firewall policies: the pull and push commands are somewhat symmetrical, so if you can’t access the central server for pushing, you can probably pull from your work machine to some intermediate location, and then push to the central server (see the sketch after this list). The possibilities are countless.
  • You can keep all your server or application configuration in Git repositories. This allows you to restore the configuration quickly after an unsuccessful change, and you’re not dependent on the central server availability.
  • You can have as many central servers as you need for your workflow (for example, I use two public repositories for Torrus).
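
For example, the firewall workaround mentioned in the list could look like this (a sketch only; all host and path names are hypothetical):

  # on an intermediate host which can reach both machines:
  # fetch the changes from the workstation over SSH...
  git clone ssh://user@workstation.example.net/home/user/project.git
  cd project
  # ...and forward them to the central server
  git push ssh://git@central.example.org/project.git master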

But of course, Git is different, and it takes a bit of learning. The whole workflow has to be adapted, as well as your habits:

  • Your commits don’t go to the central server until you explicitly push them.
  • There is no such thing as a revision number; you have SHA-1 hashes instead.
  • You are encouraged to do more frequent commits, or group them differently. As you do your commits locally without disturbing anyone, you can control the granularity of your commits in a much more flexible way. You can also undo your commits before they are pushed to the central server, as shown in the sketch below.
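
A short sketch of this local-first workflow (only standard Git commands, nothing project-specific assumed):

  # commits stay local until pushed
  git commit -m 'work in progress'
  # reword or extend the last commit while it is still unpublished
  git commit --amend
  # or undo it entirely, keeping the changes staged
  git reset --soft HEAD~1
  # publish explicitly when ready
  git push origin master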

There are many ways to organize your Git repositories:

  • If you work alone and your machine is backed up, you may not even need a central repository.
  • At GitHub you can have unlimited public repositories, or a number of private repositories at a cost.
  • Gitorious is a relatively new service, analogous to GitHub. SourceForge and many other open-source project hosting services also offer Git hosting.
  • Gitolite is a nice piece of software that allows you to set up your own Git hosting server easily, with SSH key authentication for multiple users and fine-grained access rights. Debian and other packages are available.

UPD: some interesting comments at Hacker News


DENOG3 presentation

I gave a presentation at the DENOG3 meeting last week, covering the following Perl-based open-source software products for network management and monitoring:

  1. Torrus, a well-established and mature software for massive SNMP polling and performance monitoring.
  2. Gerty, a new project for network automation. The tool targets any tasks on network devices that require interaction and automation. The first release is expected soon.
  3. Mooxu, a new project which is currently in its early design phase. The product will provide a platform for distributed network testing and monitoring (eventually it may replace Torrus).

The slideshow PDF is available at the meeting agenda page, and a video recording will be available soon.


Distributed Testing Platform: design concept

Author: Stanislav Sinyagin
Document status: concept draft

UPD: the project name is now Mooxu

Introduction

In many network environments, especially in those of large ISPs or carriers, there’s a need to periodically execute some network tests. For example, an IPTV transport provider would need to make sure that all important multicast streams are available in every part of its edge network. Or a customer support engineer would need to collect byte and packet counters from a particular network port every 5 seconds.

The new software system (project name: Mooxu) is designed to provide an open-source framework that enables network operators to build a testing environment for their needs. A number of open-source testing probe modules will also be available.



Autoconf/Automake installer for a Perl program

If you create a Perl application and want it to be installed in a standard path like /usr/local or /opt/yourapp, it’s quite easy to do with the standard GNU Autoconf, Automake and a couple of helper scripts.
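
As a minimal sketch of the idea (the project name myapp is hypothetical, and a real application would need more plumbing for Perl library paths):

  ## File: configure.ac
  AC_INIT([myapp], [1.0])
  AM_INIT_AUTOMAKE([foreign])
  AC_PATH_PROG([PERL], [perl])
  AC_CONFIG_FILES([Makefile])
  AC_OUTPUT

  ## File: Makefile.am
  bin_SCRIPTS = myapp.pl

  ## Build and install:
  autoreconf -i
  ./configure --prefix=/opt/myapp
  make install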


Using the Tumblr API v2 from Perl

Tumblr API v2 uses the OAuth mechanism for authentication, and this mechanism signs every request with SHA-1 or another hash algorithm.

If your posts contain non-ASCII data, here starts the tricky part: Perl has some special treatment for UTF-8 strings, and incorrect handling can easily lead to a wrong signature.

At first, I tried to use the LWP::Authen::OAuth module, as it provides the simplest interface and hides most of the OAuth internal logic. Unfortunately, it doesn’t care much about the string encodings, and UTF-8 strings get corrupted and result in invalid signatures.

Net::OAuth appeared to be a bit more complex and somewhat under-documented, but proved to work correctly with UTF-8 data.

It took a while to build a working example, as the module documentation cuts some important corners.

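In outline, the approach that worked looks like the sketch below. The credentials, blog name and post text are placeholders; the crucial step is encoding the post body into UTF-8 bytes before the request is signed (the script itself must be saved in UTF-8 for use utf8 to apply):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use utf8;   # string literals in this file are character strings
  use Encode qw(encode_utf8);
  use LWP::UserAgent;
  use Net::OAuth;
  $Net::OAuth::PROTOCOL_VERSION = Net::OAuth::PROTOCOL_VERSION_1_0A;

  my $url = 'http://api.tumblr.com/v2/blog/example.tumblr.com/post';

  # the signature is computed over bytes, so encode the UTF-8 text
  # before it goes into the signed parameters
  my %post_params = (type => 'text', body => encode_utf8('Grüezi wohl!'));

  my $request = Net::OAuth->request('protected resource')->new(
      consumer_key     => 'CONSUMER_KEY',       # placeholder credentials
      consumer_secret  => 'CONSUMER_SECRET',
      token            => 'ACCESS_TOKEN',
      token_secret     => 'ACCESS_TOKEN_SECRET',
      request_url      => $url,
      request_method   => 'POST',
      signature_method => 'HMAC-SHA1',
      timestamp        => time(),
      nonce            => int(rand(2**31)),
      extra_params     => \%post_params,
  );
  $request->sign();

  # to_post_body() includes both the OAuth and the post parameters
  my $ua = LWP::UserAgent->new();
  my $response = $ua->post($url, Content => $request->to_post_body());
  print $response->status_line(), "\n";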


Git repository with HTTPS write access

(this is to document a setup which was not used in the end, and was replaced with gitolite)

1. Rent a VPS. For example, at TOCICI a minimal VPS costs as little as $33 per year. The following tutorial is tested with Ubuntu 11.04 on such a minimal VPS.

2. Install and configure Lighttpd

aptitude install git-core lighttpd

