Restricting Linux Logins to a Specified Group

If you have Linux boxes that authenticate over LDAP but want logins on specific boxes to be restricted to a particular group, there is a simple way to achieve this.

Firstly, create a new file called /etc/group.login.allow (it can be called anything – you just need to update the line below to reflect the name).

In this file, pop in all the groups that should be able to log in:

admin
group1
group2

Edit /etc/pam.d/common-auth (on Ubuntu; it might be called /etc/pam.d/system-auth or something very similar on other distributions). At the top of the file (or at least above the other auth entries), add the following line:

auth required pam_listfile.so onerr=fail item=group sense=allow file=/etc/group.login.allow
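
For context, this is roughly how the top of the file might look afterwards on Ubuntu; the pam_unix and related lines below are the stock entries and may differ slightly on your release:

# top of /etc/pam.d/common-auth – the listfile check must run before the normal auth entries
auth required pam_listfile.so onerr=fail item=group sense=allow file=/etc/group.login.allow
auth [success=1 default=ignore] pam_unix.so nullok_secure
auth requisite pam_deny.so
auth required pam_permit.so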

For the record, I found this little tidbit over at the CentOS forums.

Exporting X11 to Windows [1109]

Playing Skyrim over the last week, I sometimes missed Linux so terribly that I wanted a piece of it, and not just the command-line version. I wanted X windows on my Windows 7.

There has been a solution for this for several years; the first time I did it, I installed Cygwin with X11, but there is a far simpler way to accomplish it.

Install Xming. I then used PuTTY, which has an option to forward X11. Once logged in, running xeyes shows the window exported onto my Windows 7 desktop. Ah… so much better.
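
If you are using an OpenSSH client instead of PuTTY (from Cygwin, for instance), the equivalent is a one-liner; the hostname below is just a placeholder:

# -X enables X11 forwarding; DISPLAY is set up automatically on the remote end
$ ssh -X user@linuxbox xeyes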

I actually used this to run terminator to connect to a number of servers. Over the local LAN, the windows didn’t have any perceptible lag or delay. It was more or less like running it locally.

It is possible to set up shortcuts to run an application through PuTTY and have it exported to your desktop. I haven’t played with this enough to comment though.

This of course only worked because I have another box running Linux. If that is not the case for you, then you might want to try VirtualBox, but since the Linux kernel developers have described its kernel modules as tainted crap, you might want to consider VMware instead, which is an excellent product.

Saving your workspace window configuration in Linux [1102]

I am usually working on a good half a dozen things at any given time, which means that I usually have a good ten or twenty windows open. My Chromium currently has 134 tabs, and this is after I cleaned up and closed all the tabs I no longer need.

Luckily, working in Linux means that I can spread each stream of work into the various workspaces.

Now, GNOME 3 makes things a little more complicated with its dynamic workspaces, but I’m learning to use them to my advantage.

However, with Ubuntu 11.10 Oneiric Ocelot and GNOME 3, I seem to be running into an issue regularly… if I leave my computer for a while, it doesn’t unlock correctly. The screen remains black, I can’t move the mouse to my second screen, and the unlock screen doesn’t show up.

Thinking about it, it seems like there might be two screen savers being started but I shall investigate that tomorrow. I have the same issue at both work and home so it is more likely to be related to Ubuntu + GNOME 3 or something about the way I set things up.

I usually resolve this by logging into the console, and here’s a neat trick for killing all your processes in one fell swoop:

$ kill -9 -1
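# -1 is the special PID meaning "every process I am allowed to signal",
# so run this as your own user from the console login (not as root) to clear just your session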

Another thing I have been doing a bit more of recently is gaming, which involves rebooting into Windows.

Both of the above leave me with a restarted workspace. Starting up the applications pops them all into the same workspace. Chrome is especially a nightmare: I might have 135 open tabs, but they are in about six windows spread across four workspaces.

It is annoying to have to distribute these things out each time.


Synergy with Linux Server & Mac Client

I borrowed a Mac to try and play with iPhone development. I already have a Linux box (running Ubuntu 9.10). Anyone who has used two computers simultaneously knows how annoying it is to have two keyboards and mice plugged in. I originally anticipated just using X11 forwarding. However, it is an iMac with a big, beautiful screen, and it would be an absolute waste not to use it.


Perfect Linux

According to Brian Lunduke, Ubuntu 9.10 is almost perfect, and I concur.

Being a bit of a purist, I ran Debian for many years but found their stable releases lagging too far behind. This was largely down to their perfectly understandable view that a release is ready only when it is right.

For a while, I ran their unstable distribution, Sid, named after the disturbed, hyperactive ten-year-old boy in the film Toy Story. The idea is that Sid breaks things, and it certainly did. While it taught me a heck of a lot about Linux (and the terminal), my computer was broken on a very regular basis.


Vista Guest, Linux Host, VirtualBox, Host Networking – Bridge

One would think that it would be straightforward, work off the bat, or at least have some reasonable documentation. Unfortunately, no!

I needed host networking to be able to access network resources (Samba shares etc.), which does not work if the guest OS is on NAT. 😦

Solving it was easy though… I assume Vista is installed as a guest with the guest additions and that your user account is a part of the vboxusers group.

On the Linux host, first install bridge-utils. I run Ubuntu, so it was as easy as:

$ sudo aptitude install bridge-utils

Next, you need to set up the bridge; again, easy on Ubuntu:

Add the following section to /etc/network/interfaces:

auto br0
iface br0 inet dhcp
bridge_ports eth1
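
If eth1 previously had its own configuration, it is worth switching it to manual so that only the bridge picks up an address. A fuller sketch of that section (assuming eth1 is the NIC being bridged) might be:

auto eth1
iface eth1 inet manual

auto br0
iface br0 inet dhcp
bridge_ports eth1
bridge_stp off
bridge_fd 0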

Add the interface to VirtualBox; here vbox0 is the new host interface, ‘shri’ is the user it belongs to, and br0 is the bridge:

$ sudo VBoxAddIF vbox0 'shri' br0

Within the VirtualBox guest settings, choose Host Networking and, for the interface, choose br0.

Bring the interface up:

$ sudo ifup br0

and start your guest OS… et voilà, it just works…

Making Twitter Faster

From my perspective, Twitter has a really, really interesting technical problem to solve: how to store and retrieve a large amount of data really, really quickly.

I am making some assumptions based on how I see Twitter working. I have little information about how it is architected, apart from some posts suggesting that it runs Ruby on Rails with MySQL.

Twitter is in the rare category where a very large amount of data is being added. There should be no updates (except to user information, and there should be relatively little of that). There is no need for transactionality. If I guess right, it should be a large number of inserts and selects.

While a relational database is probably the only viable choice for the time being, I think Twitter could scale and perform better if all the extra bits of a relational database system were removed.

I love challenges like this. Technical ones are easier 😉

If I didn’t have a lifetime job, I would prototype this in a bit more depth. Garry pointed me in the direction of Hadoop. Having had a quick look at it, I think it can take care of the infrastructure, clustering, and massive horizontal scaling requirements.

Now for the data layer on top: how to store and retrieve the data. HBase is probably a good option, but doing it manually should be fairly straightforward too.

From my limited understanding of Twitter, there are two key pieces of functionality: timelines and search.

The timelines can be solved by storing each tweet as a file within a directory structure. My tweets would go into:

/w/o/r/d/s/o/n/s/a/n/d/<tweet-filename>

The filename would be <username>-<timestamp>

For the public timeline, you have a similar folder structure, but keyed on the timestamp. For example, the timestamp 1236158897 would go into the following structure as a symlink:

/1/2/3/6/1/5/8/8/9/7/<username>

For search, pick out each word in the tweet and pop the tweet into that word’s folder as a symlink. You could have a single folder per word or follow the per-character structure above:

/t/w/i/t/t/e/r/<username>-<timestamp> OR

twitter/<username>-<timestamp>
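
Pulling those pieces together, here is a minimal shell sketch of how one tweet might be filed; the /tweets, /timeline and /words roots (and the example tweet) are made up for illustration:

# File the tweet itself under one directory per character of the username
user=wordsonsand
ts=1236158897
userdir=$(echo "$user" | sed 's|.|&/|g')      # w/o/r/d/s/o/n/s/a/n/d/
mkdir -p "/tweets/$userdir"
echo "my first tweet" > "/tweets/${userdir}${user}-${ts}"

# Public timeline: a symlink keyed on the timestamp, one directory per digit
tsdir=$(echo "$ts" | sed 's|.|&/|g')          # 1/2/3/6/1/5/8/8/9/7/
mkdir -p "/timeline/$tsdir"
ln -s "/tweets/${userdir}${user}-${ts}" "/timeline/${tsdir}${user}"

# Search: symlink the tweet into a folder for each word it contains
for word in my first tweet; do
    mkdir -p "/words/$word"
    ln -s "/tweets/${userdir}${user}-${ts}" "/words/$word/${user}-${ts}"
done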

You would then have an application running on top, with a distributed cache and an API that makes access to the data easier than direct file access. Running on Linux, the kernel will take care of a large part of the caching and buffering automatically, as long as there is enough RAM on the box.

This can in theory be done without Hadoop in between, by separating the directory structures across multiple servers, but that has complications of its own, especially around adding and removing boxes for scalability.

You are also likely to run into limits on the number of files and sub-directories, but those can be solved by ‘archiving’ – there are multiple options for that too…

Thinking about this problem brought me back to the good old days of working on the search mechanism within megabus.com. We needed the site to deal with a large number of searches on limited hardware when the project was still classified as a pilot.

With some hard work and experimentation, we were able to reduce the search time to a tenth of the original time.

I’ll admit that I don’t know the details or the intricacies of the requirements that Twitter has. I have probably over-simplified the problem, but it was still fun to think about. If you can think of problems with this – let me know; I wanna turn them into opportunities 😉

Customisation

Being an avid Linux user for years, I am seriously spoilt in terms of being able to customise everything / anything to be more the way I want it to be…

There are two main reasons for this. The first is that most software on Linux is highly customisable to start off with. The second is that if you don’t like something, you can change it.

There is also the nice fact that most features you think would be cool or useful in software are already available in some form, because someone else thought so too, before you did, and has had the chance to spend some time building them.

I love this so much that I have often put together a quick Linux box to do things that could easily be done by an embedded device like a router. I have swayed between the two options based on how much I want simplicity versus flexibility.

One of my favourite responses to someone telling me that we need something we don’t have is “we’ll build one”… The software customisation / writing habit has turned into a metaphor that I apply across more and more things. You need a new table with custom bits – let’s build it. You need a classic car with all the modern gizmos – you know what, let’s just build it.

This has its pros and cons. For one, it feels like anything is possible. It also becomes very frustrating to work with limited, limiting, or closed-source software (especially when you just want to fix a quick bug that really irks you). And it eats up all your time as you try to do all the things you want… just because you can…

Striking a balance is hard, especially when a client asks if it is possible to do something very specific. The answer is of course yes, and there is a question that goes with that response: at what value does it become cost effective and provide a good return on investment (ROI)?

Controversy

We have never been shy about voicing our opinions or being controversial. While discussing some PR requirements recently with a potential agency, we were asked whether we would be willing to be controversial.

We are not necessarily controversial; it is just that we hold views that are usually a little different from the mainstream. It could be said that we bring the alternative to the mainstream.

But then, so have some world governments, bringing open source software into their workplaces instead of Microsoft’s over the last few years, successfully or unsuccessfully.

Someone recently suggested that we were anti-Microsoft. I don’t think that is the case. Microsoft has its place in a technology infrastructure; it is simply that its position is usually overrated or misplaced. As far as desktops for technically shy users are concerned, there is really no alternative to Microsoft Windows. I can hear the Mac users scream that Macs are also an alternative. Theoretically, yes, but the fact is that they are too expensive for someone to dabble with. This is precisely the reason that Microsoft Windows dominates the desktop market.

We support and use Linux. In fact, the majority of the desktops in the office run Linux (Ubuntu, as it happens), but people who have a non-technical role use Windows. They could use Linux, but Windows is better suited to their role.

This is not necessarily a cost-saving decision. Sure, we have saved thousands of pounds by sticking to Linux instead of using Windows, but that is a coincidence more than anything. In some ways, it is a testament to the skillset of the people who work at Kraya that they are comfortable with Linux. The mindset of Linux is in alignment with the mindset of a developer.

I used to develop on Windows, and I often found myself fighting with it, whereas Linux just fits. There are several reasons for this, one being that Linux forces you to understand what you are (trying to) do in a bit more depth, instead of pretending it is magically taken care of.

I am not, for one moment, implying that developers who use or develop on the Windows platform are inferior or not as skilled; simply that my experience was that the Windows platform made it easier to do things badly and more difficult to do things well.

Microsoft has done wonders in bringing technology to the masses and making it more accessible. However, there is still a massive barrier, even for people in the technology sector, to appreciating and using technologies which require a bit more experience or knowledge to use appropriately.

There are a couple of really good examples. PostgreSQL is a powerful, outstanding database server that can easily compete with Microsoft SQL Server and Oracle. However, very few people know about it, and even fewer use it.

MySQL on the other hand is also an open source database server but is much more widely used and accepted.

It surprises me when MySQL is used where PostgreSQL is, from a technical perspective, better suited. MySQL is faster than PostgreSQL, at the cost of poor transaction management (at best). For any system where data integrity is even remotely important, PostgreSQL is a better choice. However, since there are better GUI tools for MySQL, and since it is easier to get the hang of, it gets chosen.

This gives technology, and people in that sector, a bad name. Every tool or piece of software has its place and should be used in an environment where its strengths are displayed, not its weaknesses. We have instances where we use multiple database servers within one project: PostgreSQL for all the data-integrity-sensitive areas and MySQL for the speed-sensitive areas. Sometimes you want integrity and speed; in those cases, you have to make a choice based on which is more important, or layer the databases to use the strengths of both.

Metaphorically speaking, MySQL is a hammer, and PostgreSQL is a sledgehammer. Would you use a sledgehammer to crack a nut, or a hammer to crack a slab of concrete?

Before someone jumps down my throat, I am not suggesting that PostgreSQL is better than MySQL or vice versa – just that they have different goals, different strengths and different weaknesses. They have spent a lot of effort converging and shoring up their weaknesses, but no matter the amount of convergence, their core goals are different enough that they will never truly be able to remove their weaknesses without giving up some of their strengths as well. One tool cannot be both a hammer and a sledgehammer…

On top of Tasktop

My post about tracking time attracted the attention of Tasktop. While it had been mentioned to me before, I was mistakenly under the impression that it was a Windows-only app.

I was pleased to find out that it is also available for Linux. Great… let’s try it out.

The first stumbling block is the requirement to register on the website before I can download a trial. I am a firm believer in try before you buy. I should be able to register, but it should be entirely my choice.

I am more comfortable with registering before buying or for the use of a free piece of software. However, registering for a trial always irritates me. This was also the case when I wanted to trial InDesign / Illustrator the other day.

After registering, there was the wait for the email to arrive. Now, this is irritating: when I want something, I want it NOW. I hate waiting. Adobe did not make me wait for a registration confirmation email before downloading the trials. There are two good reasons why this irritates me.

  1. Email, as reliable as it generally is, can take time – in theory, anywhere from a few seconds to hours. What if my mail server is currently down? Or, even more importantly, what if I have shut down my mail client so that it does not keep distracting me from something I am trying to do? Opening up my mail client, I now want to find out about the other emails in my inbox and whether any of them require action…
  2. I have reluctantly provided details about myself. Confirming my email address before I am allowed to download a trial suggests that Tasktop does not trust me enough to just let me download it. The software has started off on the wrong foot. How much of an issue is it really if someone gives the wrong details before downloading a trial? Is it really that important to be able to keep bugging them via email to buy the product?

I was curious enough to jump through the hoops and download the product. The first thing I noticed is that there is no 64-bit build for Linux :-(. That means more steps to install it on my 64-bit machine, so instead I installed it on one of my 32-bit machines to save time.

Once the download completed, the steps on the website said that I needed to configure it (with ./configureTasktop.sh) and then run Tasktop. The configuration step required no input from the user and output nothing. I have to ask:

  1. Why is the configuration step not integrated into Tasktop and configured to run once? Alternatively,
  2. Why does the configuration step not start Tasktop right after?
  3. Even better: make Tasktop a symlink to configureTasktop.sh, which then re-points the link at the Tasktop binary and runs Tasktop right after (see the sketch below this list). This means that, from the user’s perspective, they are always running the same command, and you save any cost associated with run-once checks.
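
A rough sketch of that run-once idea, with the binary name (tasktop-bin) assumed purely for illustration:

# Ship Tasktop as a symlink to the configure script:
#   Tasktop -> configureTasktop.sh
# and have configureTasktop.sh finish with something like:
ln -sf tasktop-bin Tasktop     # re-point the symlink at the real binary
exec ./Tasktop "$@"            # then hand straight over to it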

I finally got Tasktop to run, and it asked me if I wanted to install the Firefox add-on to integrate with Tasktop. I wanted to see how it integrates, so I did. Of course, this is yet another step.

A restart later, I was ready to try out Tasktop – or was I? We use Bugzilla to track tasks, and I wanted to integrate that, similar to how I do it in Eclipse. This was also trickier than I expected.

I went into the partner connectors section, which did not cover Bugzilla, which I assumed meant that it came with Bugzilla integration by default. This is true, but how the hell do I get there to configure it? It took me a little while to find the configuration section (there are no menus). Once I was there, I wanted to get back to the original layout, which was tricky since the “close configuration” button was nicely hidden away at the top right.

Once I had this working, I tried out the activate/deactivate mechanisms, and they work just the same as in Eclipse – except that with the Firefox plugin, the links you browse are added as part of your context – GREAT!

I added a task to blog about it, went through writing half the document, then decided to deactivate it before I started working on something else. All the Firefox tabs were closed – again, great…

The problem is that when you re-activate the context, it just clears the tabs in Firefox and shows you the links you last had open. A few of the pages I had open shared the same title, so going through them by trial and error to get back to the blog post was tricky. More importantly, the cookie was already gone and I had to log in again. This might be a timeout issue with WordPress, so I won’t tag that against Tasktop.

I haven’t tried linking folders / files yet, but considering the above process took me more time than I expected due to the sheer number of steps involved, I shall have to leave that for another day. In all honesty, it might never happen.

I do like the time-logging feature of Tasktop, as it tells me which tasks I spent my time on, in different chart formats. This is great. However, I have a problem in that this is on an individual basis: I see nothing here about how a team leader can link together the Tasktop data from the team to calculate total time spent on a project / task. This is a necessary feature for a tool like this in a team environment.

It is possible that all of this is easier in a Windows environment – possibly because it was built there, but more likely because Windows users are used to taking several steps to achieve something (what is it – seven clicks to delete a file in Vista?).

Having ranted on for a while, don’t get me wrong: I think Tasktop is a fantastic concept, and with a bunch of tweaking it can be a very intuitive tool to use. However, at the stage it is at, it does not do what I need it to do. It is actually more obtrusive than useful (e.g. by removing all my tabs from Firefox when switching out of a context and not reinstating them on going back to the context).

Then again, it is probably just because I simply expect too much… 😦