Tag: Genomic

software  DGD website released!

The Duplicated Gene Database website has been released! This database provides a simple way to quickly and easily select groups of tandem duplicates or large multigene families by gene identifier, chromosomal location and/or keywords. This database could be useful for various fields of application (structural genomics, gene expression profiling, gene mapping, gene annotation) and could be extended to other genomes as genomic annotations and peptide sequences become available.

homepage

http://dgd.genouest.org

software  AnnotQTL accepted for publication in the NAR 2011 Web Server issue

NAR has just let us know that our AnnotQTL manuscript has been accepted for publication. The manuscript will be online in 2-3 weeks! In addition to the first release of AnnotQTL, we have added several ‘missing’ features:

  • The assembly version and the date of the request are now inserted into the exported text file (TSV or XML).
  • You can now run multiple analyses of several QTL regions (via a specific form in the ‘submit a request’ section). Of course, in this ‘multiple analyses’ mode, the highlight feature still works (but for a single biological function, e.g. a reproduction trait for all the QTL regions).

AnnotQTL can be found at http://annotqtl.genouest.org. We have also decided to add new features in the near future (based on the referees’ comments), such as defining a genomic region based on the STS markers surrounding a QTL region. A contact form is available on the official AnnotQTL website, where you can leave a comment or ask for new species. The article is available here.

software  AnnotQTL website has been released!

Recently, we have released a new website: AnnotQTL, a web tool designed to gather the functional annotation from different prominent websites while limiting the redundancy of information.

The last steps of genetic mapping research programs require closely analyzing several QTL regions to select candidate genes for further studies. Despite the existence of several websites (NCBI genome browser, Ensembl Browser and UCSC Genome Browser) and web tools (BioMart, Galaxy) to achieve this task, the selection of candidate genes is still laborious. Indeed, the information available on the prominent websites differs slightly in terms of gene prediction and functional annotation, and some other websites provide extra information that researchers may want to use. Merging and comparing this information can be done manually for one QTL containing a few genes, but would hardly be possible for many different QTL regions and dozens of genes. Here we propose a web tool that, for the region of interest, merges the lists of genes available in NCBI and Ensembl, removes redundancy, adds the functional annotation from different prominent websites and highlights the genes whose functional annotation fits the biological function or disease of interest. This tool is dedicated to the sequenced livestock species (cattle, pig, chicken and horse) and the dog, as they have been extensively studied (indeed, more than 8,000 QTL were detected).
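The merge-and-highlight step described above can be sketched in a few lines of Python (a simplified illustration, not AnnotQTL’s actual code; the record layout and field names are hypothetical):

```python
def merge_and_highlight(ncbi_genes, ensembl_genes, keyword):
    """Merge two gene lists keyed on gene symbol, keep one record per gene,
    and flag genes whose functional annotation mentions the keyword."""
    merged = {}
    for gene in ncbi_genes + ensembl_genes:
        # first occurrence wins; duplicates from the other source are dropped
        merged.setdefault(gene["symbol"], dict(gene))
    for gene in merged.values():
        gene["highlight"] = keyword.lower() in gene["annotation"].lower()
    return list(merged.values())
```

The real tool of course does much more (coordinate handling, several annotation sources), but the principle is the same: deduplicate on a shared identifier, then flag the entries matching the trait of interest.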

The AnnotQTL server can be found here: http://annotqtl.genouest.org/

Unix  How to mirror an FTP site?

For research purposes, I want to mirror the GEO directory of the NCBI FTP site. These data are huge: more than 1,000 annotation platform files and more than 20,000 GSE files! Platform files are updated each month and GSE files on a daily basis. As you may imagine, I quickly dropped the idea of developing some PERL programs to achieve this task.

I first tried to use a software package dedicated to the mirroring of scientific databases: BioMaj. With the annotation platform files, BioMaj works fine: files were quickly downloaded and inserted into SQL databases, using the post-process script management included in BioMaj. But with the GSE files, I think I reached the limits of BioMaj, because of the structure of the GSE data files. Indeed, the “seriesmatrix” directory is organized into many subdirectories (almost 20,000, each containing at least one file) and BioMaj spent more than 6 hours just listing the directory contents!!! Then, the download and management of the files (as BioMaj handles file versioning) were quite long, too! More problematic, BioMaj froze / halted / stopped during the update process. After a while, I finally tried another way to make a local mirror: using classic Unix tools, such as… lftp, which has mirroring and multi-threading features!

Using lftp is quite simple. I found some useful shell scripts here, which I’ve modified:

#!/bin/bash
HOST="ftp.ncbi.nih.gov"
USER="anonymous"
PASS="anonymous"
LOG_dir="/data/GEO/logs"                    # where the mirror logs go
LCD="/data/GEO/platforms"                   # local directory
RCD="/pub/geo/DATA/annotation/platforms"    # remote directory
DATE=$(date '+%Y-%m-%d_%H%M')
# -a lists hidden files; mirror skips the old/ subdirectory and
# downloads up to 10 files in parallel, logging to a dated file
lftp -c "set ftp:list-options -a;
open ftp://$USER:$PASS@$HOST;
lcd $LCD;
cd $RCD;
mirror --verbose --exclude-glob old/ --parallel=10 --log=$LOG_dir/GPL_$DATE"

This script also works with the GSE data files and is relatively fast (about 45 min to get the directory listing and less than one night (!) to download the files).

PERL  Biomart and error 500…

One great thing about BioMart is that you can export your query as PERL code. You set up your query once and then you can launch your PERL script to update your databases, for example (after installing the BioMart API, which is not the easy part, especially with the registry files). It sounds great but, in practice, it doesn’t work well at all for big queries: bandwidth is very, very, very slow, and most of the time you will get an “error 500 read timeout”… So you re-launch your script, and again, and again… After a while, you will get upset with BioMart, trust me!

So, I searched the BioMart help and found the MartService. As the help is “missing”, I tried the examples on the website. To my mind, they are not particularly clear (what does the POST example mean?) nor working (the wget command returned an error). So, I tried different things. I first picked up the XML file (XML button, top right). By the way, the “Unique results only” option didn’t work in the XML file: whether the option was selected or not, the XML file still contained the option uniqueRows = ’0′ (don’t forget to change it to ’1′, or your files will be very, very, very big). After hanging around the website for a while, I copy/pasted the content of the file WebExample.pl:

# An example script demonstrating the use of the BioMart web service
use strict;
use warnings;
use HTTP::Request;
use LWP::UserAgent;

# Read the whole BioMart XML query from the file given on the command line
open (FH, $ARGV[0]) || die ("\nUsage: perl webExample.pl Query.xml\n\n");

my $xml = '';
while (<FH>) {
    $xml .= $_;
}
close(FH);

# POST the query, as a 'query=' form field, to the MartService endpoint
my $path = "http://www.biomart.org/biomart/martservice?";
my $request = HTTP::Request->new("POST", $path, HTTP::Headers->new(), 'query=' . $xml . "\n");
my $ua = LWP::UserAgent->new;

# Stream the response in 1000-byte chunks, printing the data as it arrives
$ua->request($request,
    sub {
        my ($data, $response) = @_;
        if ($response->is_success) {
            print $data;
        }
        else {
            warn ("Problems with the web server: " . $response->status_line);
        }
    }, 1000);

Then, I used the following command (as indicated in the example):

perl WebExample.pl myfile.xml

… And… it worked: data were flowing into my terminal! Wunderbar! Finally, don’t mess with the installation of the BioMart API and the configuration of registry files (not tricky at all…) if you just want to automatically update your data: use the XML approach with the LWP script. It’s easier, faster, and you won’t get the “error 500 read timeout”.
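If you’d rather avoid the PERL/LWP stack altogether, the same POST can be sketched with the Python standard library. This is just an illustration of the same idea, not an official BioMart client; the endpoint URL and the ‘query=’ form field are the ones used in the script above, and the helper names are mine:

```python
import urllib.parse
import urllib.request

# Historical central BioMart MartService endpoint (same as in the Perl script)
ENDPOINT = "http://www.biomart.org/biomart/martservice"

def encode_query(xml):
    """Wrap a BioMart XML query in the 'query=' form field MartService expects."""
    return urllib.parse.urlencode({"query": xml}).encode("ascii")

def run_biomart_query(xml_path):
    """POST the XML query file to MartService and return the (typically TSV) reply."""
    with open(xml_path) as fh:
        body = encode_query(fh.read())
    with urllib.request.urlopen(ENDPOINT, data=body) as response:
        return response.read().decode("utf-8")
```

Usage would be something like `print(run_biomart_query("Query.xml"))`, with the same XML file you export via the XML button.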

software  G3C

What is G3C?

We have developed a software package called G3C (for Get all Co-annotated Co-located Clusters) using the PERL language. This software has been designed to identify, on a genome scale, all groups of co-located genes that share a similar GO annotation. Basically, the principle of the software is the following: within a genomic window, it computes the p-value of the existence of a cluster of co-annotated genes for a specific similarity group of GO terms, given by the hypergeometric distribution.
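As a rough sketch of that computation (not the actual G3C code, which is in PERL and does much more), the hypergeometric tail probability of observing at least k annotated genes among the n genes of a window, when K of the N genes on the genome carry the GO annotation, can be written as:

```python
from math import comb

def cluster_pvalue(N, K, n, k):
    """P(X >= k) where X ~ Hypergeometric(N, K, n):
    N genes on the genome, K of them carrying the GO annotation,
    n genes in the genomic window, k of those annotated."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total
```

For example, `cluster_pvalue(20000, 50, 10, 3)` would give the probability of seeing 3 or more annotated genes in a 10-gene window by chance alone; the smaller the p-value, the stronger the evidence for a genuine cluster of co-annotated, co-located genes.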

How does it work?

You need a functional PERL environment to use the software (the required modules are listed in the online documentation). You will also need an SQL database to store the data (pre-processing scripts are included in the package).

Source

To download the G3C package, click on the following link: coming soon