• Configuring PostgreSQL Repository

    This article explains how to configure a PostgreSQL repository connection.

    To create a new repository connection:

    Click Options
    Click Plus
    Enter a connection name
    Select the connection type
    Enter the username and password
    Click ODBC Manager and create a new ODBC DSN
    Select the newly created DSN
    Make sure that the connection actually works

    Configure PostgreSQL

    PostgreSQL ODBC settings

    Direct link, no registration required.
  • ETL for PostgreSQL

    Great news: Visual Importer ETL now works directly with PostgreSQL databases. The direct connection gives it a massive performance boost, so you can load more than 10,000 records per second.

    Here is the Proof

    Information    08/08/2009 13:43:14    Starting...
    Information    08/08/2009 13:43:14    Target Table: test
    Information    08/08/2009 13:43:14    Trasnformation Type:  ADD Records
    Information    08/08/2009 13:43:14    truncate table test
    Information    08/08/2009 13:43:14    Path: C:\Source Files\
    Information    08/08/2009 13:43:14    Mask: test_data_delimited_large.csv
    Information    08/08/2009 13:43:14    Found: 1 File(s) to read
    Information    08/08/2009 13:43:14    Source File: C:\Source Files\test_data_delimited_large.csv
    Information    08/08/2009 13:43:43    Read 352031 Line(s)
    Information    08/08/2009 13:43:43    Processed : 352031 Record(s)
    Information    08/08/2009 13:43:43    Records per second : 12321.7
    Information    08/08/2009 13:43:43    Rejected : 0 Record(s)
    Information    08/08/2009 13:43:43    Inserted : 352031 Record(s)
    Information    08/08/2009 13:43:43    Updated : 0 Record(s)
    Information    08/08/2009 13:43:43    Deleted : 0 Record(s)
    Information    08/08/2009 13:43:43    Filtered : 0 Record(s)
    Information    08/08/2009 13:43:43    Time Taken : 00:00:28
    Information    08/08/2009 13:43:43    Transformation is finished
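    As a quick sanity check of the log above (plain Python, figures taken straight from the log): 352,031 records at 12,321.7 records per second works out to roughly 28.6 seconds, which matches the reported Time Taken of 00:00:28.

    ```python
    # Sanity-check the throughput figures from the log above.
    records = 352_031          # "Processed : 352031 Record(s)"
    rate = 12_321.7            # "Records per second : 12321.7"
    elapsed = records / rate   # seconds; roughly 28.6, i.e. the reported 00:00:28
    print(f"elapsed = {elapsed:.1f} s")
    ```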

    And here is what our customers think about it:

    I just transferred 10,000,000 records from the main SQL Anywhere database over the network into a Postgres database without a hitch.

    Thank you
    Thank you
    Thank you


    !!!!! You guys rock!!!!!!!!

    Visual Importer Loading data into PostgreSQL database

  • Importing data into PostgreSQL

    Load Data into PostgreSQL from any data source

    The latest version of Visual Importer ETL offers full support for PostgreSQL. Data can be imported from flat files, Excel, MS Access, Oracle, MySQL, InterBase, Firebird, PostgreSQL, OLE DB, ODBC and DBF files.

    Full support for Unicode

    All versions of PostgreSQL are supported, including version 9.0.1.

    For every database, file, or data source we use the best possible way of importing data.

    Loading Data into PostgreSQL


    PostgreSQL Logo

    About PostgreSQL

    PostgreSQL is a powerful, open-source object-relational database system. It has more than 15 years of active development and a proven architecture that has earned it a strong reputation for reliability, data integrity, and correctness. It runs on all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64), and Windows. It is fully ACID compliant, has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages). It includes most SQL:2008 data types, including INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP. It also supports storage of binary large objects, including pictures, sounds, or video. It has native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, ODBC, among others, and exceptional documentation.

    An enterprise-class database, PostgreSQL boasts sophisticated features such as Multi-Version Concurrency Control (MVCC), point-in-time recovery, tablespaces, asynchronous replication, nested transactions (savepoints), online/hot backups, a sophisticated query planner/optimizer, and write-ahead logging for fault tolerance. It supports international character sets, multibyte character encodings, and Unicode, and it is locale-aware for sorting, case-sensitivity, and formatting. It is highly scalable both in the sheer quantity of data it can manage and in the number of concurrent users it can accommodate. There are active PostgreSQL systems in production environments that manage in excess of 4 terabytes of data. Some general PostgreSQL limits are included in the table below.

    More information about PostgreSQL

  • Time dimension for PostgreSQL based data warehouse

    This SQL script creates and populates the time dimension for a PostgreSQL based data warehouse:

    CREATE TABLE time_dim
    (
      time_key integer NOT NULL,
      time_value character(5) NOT NULL,
      hours_24 character(2) NOT NULL,
      hours_12 character(2) NOT NULL,
      hour_minutes character(2) NOT NULL,
      day_minutes integer NOT NULL,
      day_time_name character varying(20) NOT NULL,
      day_night character varying(20) NOT NULL,
      CONSTRAINT time_dim_pk PRIMARY KEY (time_key)
    );

    COMMENT ON TABLE time_dim IS 'Time Dimension';
    COMMENT ON COLUMN time_dim.time_key IS 'Time Dimension PK';

    INSERT INTO time_dim
    SELECT cast(to_char(minute, 'hh24mi') as integer) AS time_key,
           to_char(minute, 'hh24:mi') AS time_value,
           -- Hour of the day (00 - 23)
           to_char(minute, 'hh24') AS hours_24,
           -- Hour of the day (01 - 12)
           to_char(minute, 'hh12') AS hours_12,
           -- Minute of the hour (00 - 59)
           to_char(minute, 'mi') AS hour_minutes,
           -- Minute of the day (0 - 1439)
           extract(hour FROM minute)*60 + extract(minute FROM minute) AS day_minutes,
           -- Names of day periods
           case when to_char(minute, 'hh24:mi') BETWEEN '06:00' AND '08:29'
                then 'Morning'
                when to_char(minute, 'hh24:mi') BETWEEN '08:30' AND '11:59'
                then 'AM'
                when to_char(minute, 'hh24:mi') BETWEEN '12:00' AND '17:59'
                then 'PM'
                when to_char(minute, 'hh24:mi') BETWEEN '18:00' AND '22:29'
                then 'Evening'
                else 'Night'
           end AS day_time_name,
           -- Indicator of day or night
           case when to_char(minute, 'hh24:mi') BETWEEN '07:00' AND '19:59' then 'Day'
                else 'Night'
           end AS day_night
    FROM (SELECT '0:00'::time + (sequence.minute || ' minutes')::interval AS minute
          FROM generate_series(0,1439) AS sequence(minute)
          GROUP BY sequence.minute
         ) dq
    ORDER BY 1;
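    The period boundaries in the script's CASE expressions can be replicated in plain Python (a hypothetical helper, not part of the original script) to make the logic easy to eyeball:

    ```python
    # Replicates the day_time_name / day_night CASE logic from the SQL above
    # (hypothetical helper, for illustration only).
    def day_time_name(minute_of_day: int) -> str:
        hhmm = f"{minute_of_day // 60:02d}:{minute_of_day % 60:02d}"
        if "06:00" <= hhmm <= "08:29":
            return "Morning"
        if "08:30" <= hhmm <= "11:59":
            return "AM"
        if "12:00" <= hhmm <= "17:59":
            return "PM"
        if "18:00" <= hhmm <= "22:29":
            return "Evening"
        return "Night"

    def day_night(minute_of_day: int) -> str:
        hhmm = f"{minute_of_day // 60:02d}:{minute_of_day % 60:02d}"
        return "Day" if "07:00" <= hhmm <= "19:59" else "Night"

    # One row per minute of the day, same as generate_series(0, 1439)
    rows = [(m, day_time_name(m), day_night(m)) for m in range(1440)]
    print(rows[390])   # 06:30 -> (390, 'Morning', 'Night')
    ```

    Note that 06:30 is already "Morning" for day_time_name but still "Night" for day_night, because the two CASE expressions use different boundaries (06:00 vs 07:00).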

    Based on information provided here

  • Transform SQL Server data up to 2 times faster

    A new version of Advanced ETL Processor is available for download

    Changes are:

     + Up to 2 times faster data extraction from SQL Server
     + Up to 2 times faster data extraction from ODBC sources
     + Up to 40 percent faster loading data into SQL Server
     + Up to 40 percent faster loading data into ODBC
     + Up to 10 percent faster QVX files creation
     + Up to 10 percent faster loading data into PostgreSQL
     - Various bug fixes and improvements

    Here are our test results for pulling data from SQL Server:

    Version    Records per second    Time Taken
    New        41,100                1min 12sec
    Old        21,436                2min 19sec
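    The two benchmark runs are internally consistent: both moved roughly the same ~3 million records, and the new version's throughput is about 1.9x the old, which is where the "up to 2 times faster" claim comes from. A quick check in plain Python, using the figures above:

    ```python
    # Cross-check the benchmark figures: both runs should have processed
    # roughly the same number of records.
    new_rate, new_seconds = 41_100, 72      # 1min 12sec
    old_rate, old_seconds = 21_436, 139     # 2min 19sec

    speedup = new_rate / old_rate           # about 1.92, i.e. "up to 2 times faster"
    records_new = new_rate * new_seconds    # about 2.96 million
    records_old = old_rate * old_seconds    # about 2.98 million
    print(f"speedup = {speedup:.2f}, records: {records_new:,} vs {records_old:,}")
    ```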

    Extract Data From SQL Server NEW Version

     Extract Data From SQL Server OLD Version


    1. The performance also depends on the hardware configuration
    2. Please use our support forum to provide us with feedback
    Direct link, no registration required.
  • Working with Greenplum

    About Greenplum

    The Greenplum Database builds on the foundations of the open source database PostgreSQL. It primarily functions as a data warehouse and utilizes a shared-nothing, massively parallel processing (MPP) architecture. In this architecture, data is partitioned across multiple segment servers, and each segment owns and manages a distinct portion of the overall data; there is no disk-level sharing nor data contention among segments.

    Source: Wikipedia.

    Unlike PostgreSQL, the Greenplum database does not support the binary option of the COPY command.

    Select the Text Mode option to load data into Greenplum.
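    Since Greenplum accepts only the text (or CSV) format of COPY, a loader that targets both engines has to choose the format per target. A minimal sketch of that choice, assuming a hypothetical helper that is not part of the product:

    ```python
    # Pick a COPY format per target engine: Greenplum rejects FORMAT binary,
    # so fall back to text for it (hypothetical helper, for illustration only).
    def copy_statement(table: str, engine: str) -> str:
        fmt = "text" if engine == "greenplum" else "binary"
        return f"COPY {table} FROM STDIN WITH (FORMAT {fmt})"

    print(copy_statement("test", "greenplum"))   # COPY test FROM STDIN WITH (FORMAT text)
    print(copy_statement("test", "postgresql"))  # COPY test FROM STDIN WITH (FORMAT binary)
    ```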


