Tag: Bioinformatic

Unix  Code more shell scripts (Part. II)

With NGS data, we have to handle very (very, very) large datasets. Whatever the language used (PERL, Python, C…), the open/read part of a program is (almost always) slow, so the debugging process can be a real pain simply because of the time spent reading the file. Using basic Unix tools is one approach to cut / merge / debug / analyze the content of these files…

Wanna split?

One column contains data in the format XXXX_YYYY and you want XXXX and YYYY in two different columns? Easy, use (again) the tr command:

tr '_' '\t' < file > new_file

Faster than a PERL program!
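For instance, with a hypothetical two-column file (names and values made up here), every underscore becomes a tab:

> cat file
gene_0001    12.5
gene_0002    7.3
> tr '_' '\t' < file > new_file
> cat new_file
gene    0001    12.5
gene    0002    7.3

Keep in mind that tr works on the whole stream, so underscores occurring in other columns would be split as well.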

What is my header content?

Somebody has sent you a large file with many (many) columns and you want to know what the header contains (to extract the good columns with the cut command, for instance)? Whatever the separator used (tab, comma, etc.), reading and counting columns on screen is a pain… First, extract the first line with:

head -1 file > header_content

Then, transform your column separator into a newline character (here for a TSV file):

tr '\t' '\n' < header_content > list_header

Finally, to display the header, type:

cat -n list_header

(the -n option displays line numbers, which makes picking the right columns for the cut command easier)
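If you don't need the intermediate files, the three steps can be chained into a single pipeline (a minimal sketch, still assuming a tab-separated file):

head -1 file | tr '\t' '\n' | cat -n

You can then feed the interesting column numbers to ‘cut -f’.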

How many [XXXX] in my file?

Let’s end with a short introduction to the grep command (I will prepare a more substantial post dedicated to grep next time). The ‘grep’ command is powerful, no doubt about that, but not so easy the first time! Basically, the grep syntax is:

grep [options] my_pattern file

One useful option is the -w parameter (match whole words only). Thus, if you only want to ‘grep’ a specific word (e.g. chr and not chromosome), you should use this parameter.
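A quick illustration of the difference (the file name is made up):

> grep -c chr annotation.txt     # also counts lines containing 'chromosome'
> grep -cw chr annotation.txt    # only counts lines containing the word 'chr'

Here the -c option simply reports the number of matching lines.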

software  DGD website released!

The Duplicated Gene Database website has been released! This database provides a simple way to quickly and easily select groups of tandem duplicates or large multigene families by gene identifier, chromosomal location and/or keywords. It could be useful for various fields of application (structural genomics, gene expression profiling, gene mapping, gene annotation) and could be expanded to other genomes as genomic annotations and peptide sequences become available.

homepage

http://dgd.genouest.org

software  AnnotQTL accepted for publication in the NAR 2011 webserver edition

NAR has just let us know that our AnnotQTL manuscript has been accepted for publication. The manuscript will be online in 2-3 weeks! In addition to the first release of AnnotQTL, we have added several ‘missing’ features:

  • The assembly version and the date of the request are now inserted into the exported text file (TSV or XML).
  • You can now run multiple analyses of several QTL regions (via a specific form in the ‘submit a request’ section). Of course, in this ‘multiple analyses’ mode, the highlight feature still works (but for a single biological function, e.g. a reproduction trait applied to all the QTL regions).

AnnotQTL can be found at http://annotqtl.genouest.org. We have also decided to add new features in the near future (based on the referees’ comments), such as defining a genomic region from the STS markers surrounding a QTL region. A contact form is available on the official AnnotQTL website; you can leave a comment or ask for new species. The article is available here.

Unix  Code more shell scripts (Part. I)

I’ve noticed that I don’t code much in shell: very few sh scripts (for running multiple commands), only basic commands. That’s a mistake, since for file management (for instance), shell scripts are more powerful than any program you could write in a « higher-level » language. So, let’s start with some useful commands…

Columns work…

Use the ‘cut’ command to extract the 1st and the 3rd columns:

cut -f 1,3 file > new_file

I’d encourage you to type ‘man cut’ to access the full documentation of this command: there are other input parameters, such as ‘-d’, which sets the delimiter between columns.
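For example, for a comma-separated file (file name made up), the same extraction would be:

cut -d ',' -f 1,3 file.csv > new_file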

Extract the 1st column and keep unique element of this column:

cut -f 1 file | sort | uniq > unique_id

Please notice that you have to sort the data before extracting the unique elements, since the ‘uniq’ command only discards successive identical lines from its input.
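Note that ‘sort -u’ gives the same result in a slightly shorter form:

cut -f 1 file | sort -u > unique_id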

Remove the d?*$% ^M character (aka windows-carriage-return)

You may have friends or colleagues who are still working with Windows (everybody has their weak point)… But their files are a bit annoying… Indeed, they are full of ‘^M’ (carriage returns) with no proper line breaks! Your program will go wrong when analysing these files (since your *nix program will look for the ‘\n’ character at the end of each line)… How to remove them? There are plenty of solutions using perl, awk or sed but, again, the simplest (in my opinion) way is to use a basic unix command: ‘tr’…

> cat windows_file
aaa     123^Mbbb        456^Mccc        789^Mddd        123^Meee        456
> tr '\r' '\n' < windows_file > unix_file
> cat unix_file
aaa     123
bbb     456
ccc     789
ddd     123
eee     456

Nice, isn’t it? Of course, the ‘tr’ command wasn’t invented to translate the ^M character from Windows files (!!!). Using the ‘-d’ parameter, you can also delete certain characters from a file.
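For instance, for a real Windows file where lines end in ‘\r\n’, deleting the carriage returns (rather than translating them) avoids producing empty lines (a sketch, reusing the file names above):

tr -d '\r' < windows_file > unix_file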

Let’s loop…

This is my oldest csh script (from the 20th century!!!), but maybe my most useful one! The principle is quite simple: assume you have several datafiles to analyse with a program; this script gets the number of analyses to run and runs the main program on each datafile. Here’s the script:

#!/bin/csh
# get the file of filenames
set file = $1
# get the number of datafiles
set nl = `wc -l < $file`
set i = 1

# loop over the datafiles
while ($i <= $nl)
	set line = `head -n $i $file | tail -n +$i`
	path_to_program $line &
	@ i++
end

echo Done.

The trickiest part is the ‘head-tail’ combination. Let’s explain: this command extracts the first x lines (head) and returns the last line of this extract (tail)… So, during the first loop, the first line of the file is extracted and ‘tail’ returns that very same line. During the second loop, ‘head’ extracts the first 2 lines, ‘tail’ still returns the last one, and so on… At each iteration, ‘$line’ contains the name of a datafile. This kind of script is particularly useful for submitting jobs to a batch server (e.g. using the qsub command on a PBS system).
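For reference, here is a minimal bash sketch doing the same job with a while/read loop (path_to_program stands for whatever program you actually run):

#!/bin/bash
# read the file of filenames line by line and launch the program on each one
while read -r datafile
do
	path_to_program "$datafile" &
done < "$1"
wait
echo "Done."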

Let’s manipulate (large) datafile

One last case: you have thousands of datafiles from GEO experiments, each containing many lines of gene expression data for many NCBI GeneIDs (1st column). You want to regroup the gene expression data from many GSE files for each GeneID… Typically, you have to read each GSE file, extract the lines corresponding to each GeneID and put them into separate files. First, I tried to perform this task with a PERL program (too long). Then, a colleague told me to do it with a shell script and showed me the way (thx)… Here’s the final script:

#!/bin/bash

echo "merging datafiles..."
cat GSE* > dump

echo "get unique GeneIDs..."
listID=`cat dump | sort | cut -f 1 | uniq`

echo "exporting gene expression data to GeneID files..."
for id in $listID
do
	grep "^$id\W" dump > $id
done
rm dump
echo "done."

The most difficult part was finding the right pattern for the grep regex so that only the lines of the exact GeneID are extracted.
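To illustrate (GeneIDs made up): without the ‘\W’ (a non-word character, here the tab following the ID), GeneID 123 would also pull out the lines of GeneID 1234:

> grep "^123" dump      # matches lines starting with 123, 1234, 12345…
> grep "^123\W" dump    # matches lines whose first field is exactly 123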

That’s all for this time. Hope this will help!

software  AnnotQTL website has been released!

Recently, we have released a new website: AnnotQTL, a web tool designed to gather the functional annotation from several prominent websites while limiting the redundancy of information.

The last steps of genetic mapping research programs require closely analyzing several QTL regions to select candidate genes for further studies. Despite the existence of several websites (NCBI Genome Browser, Ensembl Browser and UCSC Genome Browser) or web tools (Biomart, Galaxy) to achieve this task, the selection of candidate genes is still laborious. Indeed, the information available on the prominent websites differs slightly in terms of gene prediction and functional annotation, and some other websites provide extra information that researchers may want to use. Merging and comparing this information can be done manually for one QTL containing a few genes, but would hardly be possible for many different QTL regions and dozens of genes. Here we propose a web tool that, for a region of interest, merges the lists of genes available in NCBI and Ensembl, removes redundancy, adds the functional annotation from several prominent websites and highlights the genes whose functional annotation fits the biological function or disease of interest. This tool is dedicated to the sequenced livestock species (cattle, pig, chicken and horse) and the dog, as they have been extensively studied (indeed, more than 8000 QTL have been detected).

The AnnotQTL server can be found here: http://annotqtl.genouest.org/

Unix  How to mirror a FTP site?

For research purposes, I want to mirror the GEO directory of the NCBI FTP site. These data are huge: more than 1,000 annotation platform files and more than 20,000 GSE files! Platform files are updated each month and GSE files on a daily basis. As you may imagine, I quickly dropped the idea of developing PERL programs to achieve this task.

I first tried a software package dedicated to the mirroring of scientific databases: BioMaj. With the annotation platform files, BioMaj works fine: files were quickly downloaded and inserted into SQL databases thanks to the post-process scripts managed by BioMaj. But with the GSE files, I think I reached the limits of BioMaj, because of the structure of the GSE datafiles. Indeed, the « seriesmatrix » directory is organized in many subdirectories (almost 20,000, each containing at least one file) and BioMaj spent more than 6 hours just listing the directory content!!! Then, the downloading and the management of the files (as BioMaj handles file versioning) were quite long, too! More problematic, BioMaj froze / halted / stopped during the update process. After a while, I finally tried another way to make a local mirror: using classic unix tools, such as… lftp, which has mirroring and multi-threading features!

Using lftp is quite simple. I found some useful shell scripts here, which I’ve modified:

#!/bin/bash
HOST="ftp.ncbi.nih.gov"
USER="anonymous"
PASS="anonymous"
LOG_dir="/data/GEO/logs"
LCD="/data/GEO/platforms"
RCD="/pub/geo/DATA/annotation/platforms"
DATE=`date '+%Y-%m-%d_%H%M'`
lftp -c "set ftp:list-options -a;
open ftp://$USER:$PASS@$HOST;
lcd $LCD;
cd $RCD;
mirror --verbose --exclude-glob old/ --parallel=10 --log=$LOG_dir/GPL_$DATE"

This script also works with the GSE datafiles (after changing the local and remote directories) and it’s relatively fast: about 45 min to get the directory listing and less than one night (!) to download the files.
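Since the files are updated regularly on the NCBI side, the script can simply be scheduled. A possible crontab entry (the script path is made up) to refresh the mirror every night:

# run the mirror script every night at 2am
0 2 * * * /data/GEO/scripts/mirror_geo.sh >> /data/GEO/logs/cron.log 2>&1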

PERL  Biomart and error 500…

One great thing with Biomart is that you can export your query as PERL code. You set up your query once and then you can launch your PERL script to update your databases, for example (after installing the Biomart API, which is not an easy part, especially with the registry files). It sounds great but, in practice, it doesn’t work well at all for big queries: bandwidth is very very very low and, most of the time, you will get a ‘500 read timeout’ error… So you re-launch your script, and again, and again… After a while, you will get upset with Biomart, trust me!

So, I searched the Biomart help and found the MartService. As the help is « missing », I tried the example on the website. To my mind, it is not particularly clear (what does the POST example mean?) nor working (the wget command returned an error). So, I tried different things. I first picked up the XML file (XML button, top right). By the way, the « Unique results only » option didn’t work in the XML file: whether the option was selected or not, the XML file still contained uniqueRows = '0' (don’t forget to change it to '1', or your files will be very very very big). After hanging around the website for a while, I copied and pasted the content of the file WebExample.pl:

# an example script demonstrating the use of BioMart webservice
use strict;
use LWP::UserAgent;

open (FH,$ARGV[0]) || die ("\nUsage: perl webExample.pl Query.xml\n\n");

my $xml;
while (<FH>){
    $xml .= $_;
}
close(FH);

my $path="http://www.biomart.org/biomart/martservice?";
my $request = HTTP::Request->new("POST",$path,HTTP::Headers->new(),'query='.$xml."\n");
my $ua = LWP::UserAgent->new;

my $response;

$ua->request($request,
	     sub{
		 my($data, $response) = @_;
		 if ($response->is_success) {
		     print "$data";
		 }
		 else {
		     warn ("Problems with the web server: ".$response->status_line);
		 }
	     },1000);

Then, I used the following command (as indicated in the example):

perl WebExample.pl myfile.xml

… And… It worked: data were flowing in my terminal! Wunderbar! Finally, don’t mess with the installation of the Biomart API and the configuration of registry files (not tricky at all) if you just want to update your data automatically: use the XML approach with the LWP script. It’s easier, faster and you won’t get ‘500 read timeout’ errors.

PERL  UNIX & PERL starter kit…

If you’re looking for a nice introductory lesson to learn the basics of UNIX commands and PERL programming, I recommend getting the PDF file available on the Korf lab website. The lesson is entitled « Unix and Perl Primer for Biologists ». This course is aimed at people « with no prior experience » in either programming or UNIX (as mentioned on the web page). I think it might be useful for a lot of people. Enjoy…

software  G3C

What is G3C?

We have developed a software package called G3C (for Get all Co-annotated Co-located Clusters) using the PERL language. This software is designed to identify, on a genome scale, all groups of co-located genes that share a similar GO annotation. Basically, the principle of the software is the following: within a genomic window, it computes a p-value for the existence of a cluster of co-annotated genes for a specific similarity group of GO terms, given by the hypergeometric distribution.
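As a reminder (the notation is mine, not taken from the G3C documentation), the hypergeometric p-value for observing at least k genes annotated with a given GO term among the n genes of the window, when K of the N genes of the genome carry that annotation, is:

p = \sum_{i=k}^{\min(n,K)} \frac{\binom{K}{i}\,\binom{N-K}{n-i}}{\binom{N}{n}}

The smaller this p-value, the less likely it is that the co-located, co-annotated genes clustered by chance.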

How does it work?

You need a working PERL environment to use the software (the required modules are listed in the online documentation). You will also need an SQL database to store the data (pre-processing scripts are included in the package).

Source

To download the G3C package, click on the following link: coming soon