Initialize Sample
Created: 2013-01-04 20:37:02
Last updated: 2013-01-05 16:17:55
This workflow saves a tabular *.pckl Python pickle dataset in the local file system, containing values calculated from physical parameters associated with potential companions of a sample of target galaxies. These original physical parameters are extracted from a PostgreSQL database containing information on all galaxies covered by the SDSS spectroscopic survey. The workflow first accesses the external database located on the AMIGA server and selects the target galaxies from the sample (those having spectroscopic redshift between 0.03 and 0.1). It then creates a tabular gridded datacube with values associated with potential neighbours. These values are calculated for each point of a 3D space defined by the axes: magnitude in the r band, photometric redshift, and sigma level of detection. The default input values used to build the parameterised datacube are:
- 14.5 < mr < 22.5 - step 0.5
- 0 < z < 0.11 - step 0.01
- 0.1 < sigma < 3.2 - step 0.2
Auxiliary function libraries and scripts are also copied to the local file system, and the PYTHONPATH environment variable is set to a value provided by the user as the Working Path of the digital experiment.
Other user-provided input values are the database connection settings: hostname, login and password.
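For illustration, here is a minimal Python sketch of how such a gridded datacube could be assembled and saved as a *.pckl file. The axis ranges and steps come from the defaults above; the compute_statistic helper and the output filename are hypothetical stand-ins for what 2par_low.py actually computes:

```python
import pickle
import numpy as np

# Axis definitions taken from the workflow's default inputs:
# r-band magnitude, photometric redshift, and sigma detection level.
mr_axis = np.arange(14.5, 22.5, 0.5)    # step 0.5
z_axis = np.arange(0.0, 0.11, 0.01)     # step 0.01
sigma_axis = np.arange(0.1, 3.2, 0.2)   # step 0.2

def compute_statistic(mr, z, sigma):
    """Hypothetical stand-in for the per-cell value computed by 2par_low.py."""
    return 0.0  # placeholder

# Evaluate one value per (mr, z, sigma) grid point.
cube = np.array([[[compute_statistic(mr, z, s)
                   for s in sigma_axis]
                  for z in z_axis]
                 for mr in mr_axis])

# Save the gridded datacube as a *.pckl pickle file, as the workflow does.
with open('initial_sample.pckl', 'wb') as fh:
    pickle.dump({'mr': mr_axis, 'z': z_axis,
                 'sigma': sigma_axis, 'cube': cube}, fh)
```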
Execution environment
The first requirement to run the workflows provided by both ROs is Taverna Workbench 2.4 or higher. AstroTaverna (a Taverna plugin) is also needed in order to provide the functionality related to Virtual Observatory web service queries and the management of standard VOTable data formats. In general, the execution environment is a Linux distribution including Python 2.x and a bash shell, with the psycopg and numpy Python packages. Access to a PostgreSQL database storing the physical parameters provided by SDSS is also needed; a dump file of the database may be downloaded from the AMIGA web server in order to be deployed and made accessible from a local execution environment.
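As a rough sketch of what this environment amounts to in practice, the following Python fragment sets up the Working Path and opens a database connection with psycopg2. The working directory path, host, database name, and the table and column names in the query are all assumptions for illustration; the connection parameters mirror the workflow inputs listed below:

```python
import os
import sys

import numpy as np   # required by the auxiliary scripts
import psycopg2      # PostgreSQL driver (the psycopg package)

# The workflow sets PYTHONPATH to the user-supplied Working Path so that
# the auxiliary libraries copied there become importable; the path used
# here is a placeholder.
working_directory = '/path/to/working_directory'
os.environ['PYTHONPATH'] = working_directory
sys.path.append(working_directory)

# Connection settings mirror the workflow inputs postgresql_server_ip,
# postgresql_server_port, db_username and db_password; the host and
# database name below are assumptions.
conn = psycopg2.connect(host='localhost', port=5432,
                        user='db_username', password='db_password',
                        dbname='sdss')
cur = conn.cursor()
# Illustrative query: select target galaxies with spectroscopic redshift
# between 0.03 and 0.1 (table and column names are hypothetical).
cur.execute("SELECT objid FROM galaxies WHERE zspec BETWEEN 0.03 AND 0.1")
print(cur.fetchone())
conn.close()
```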
Workflow Components
Dependencies (0)
Inputs (6)
Name | Description
working_directory | POSIX path to the working directory where data will be saved
postgresql_server_ip | Hostname or IP address of the PostgreSQL server
postgresql_server_port | TCP/IP port number of the PostgreSQL server (default: 5432)
db_username | Database username
db_password | Database password for db_username
pickle_filename | Name of the pickle file to save the initial sample
Processors (2)
Name | Type | Description
2par_low.py | externaltool |
get_last_line_of_STDIN | externaltool |
Outputs (3)
Name | Description
path_to_galaxy_sample |
Tool_STDOUT |
Tool_STDERR |
Datalinks (10)
Source | Sink
db_password | 2par_low.py:dbpasswd
db_username | 2par_low.py:dbuser
postgresql_server_port | 2par_low.py:tcpport
postgresql_server_ip | 2par_low.py:host
pickle_filename | 2par_low.py:pickle_filename
working_directory | 2par_low.py:working_directory
2par_low.py:STDOUT | get_last_line_of_STDIN:STDIN
get_last_line_of_STDIN:STDOUT | path_to_galaxy_sample
2par_low.py:STDOUT | Tool_STDOUT
2par_low.py:STDERR | Tool_STDERR
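As the datalinks show, the STDOUT of 2par_low.py is piped into get_last_line_of_STDIN, whose output becomes path_to_galaxy_sample. A minimal sketch of that second processor's behaviour, assuming it simply echoes the final non-empty line of its standard input (equivalent to `tail -n 1` in a shell):

```python
import sys

# Keep only the final non-empty line of standard input; the workflow
# treats this as the path to the saved galaxy-sample pickle file.
lines = [line.rstrip('\n') for line in sys.stdin if line.strip()]
if lines:
    print(lines[-1])
```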
License
All versions of this Workflow are licensed under the BSD License.
Version 2 (latest of 2)