Monday, February 26, 2007

How to do form.fill

Suppose you have a form like:

form = web.form.Form(
    web.form.Textbox('name', description='Name'),
    web.form.Textbox('subject', description='Subject'),
    web.form.Textarea('body', description='Message'),
)

You can pass fill a dictionary mapping field names to values, like:

form.fill(
    {
        'subject': 'This is my subject!',
        'body': 'This is the body!',
    }
)

and these values will fill your textboxes.
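
For completeness, here is a hedged sketch of using the filled form in a handler; it assumes web.py's usual form API, where calling the form returns a fresh copy and render() emits the HTML (check your web.py version's form docs):

f = form()    # calling the form gives you a fresh copy to work with
f.fill(
    {
        'subject': 'This is my subject!',
        'body': 'This is the body!',
    }
)
print f.render()   # emits the <input>/<textarea> markup with the values filled in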

Tuesday, February 20, 2007

X-Sendfile for large file transfers

I read Lighttpd’s weblog (Lighty’s Life) regularly, and I remember Jan talking about X-Sendfile. I thought it was interesting, but never really thought about using it… until today!

Basically, if you have a Ruby on Rails (or any other framework) page that transfers a really massive file to the client, you should use X-Sendfile: instead of streaming the file through your application, send an X-Sendfile header naming the file and let the web server deliver it directly.

Here is what you’ll need:

*Lighttpd Configuration*

To the FastCGI section of your lighty config, along with host, port, etc., add "allow-x-send-file" => "enable":


fastcgi.server = ( ".fcgi" =>
  ( "server_1" => ( "host" => "10.0.1.1", "port" => 8000, "allow-x-send-file" => "enable" ) )
)
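
Here's what the application side can look like; a minimal sketch in Python/web.py rather than Rails (the header works the same from any framework), with a hypothetical file path that must be readable by the lighttpd user:

import web

class download:
    def GET(self):
        # hand lighttpd the absolute path; it serves the file itself
        # and frees the FastCGI backend immediately
        web.header('X-Sendfile', '/data/files/huge-video.avi')
        web.header('Content-Type', 'application/octet-stream')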

Monday, February 19, 2007

Find shortest distance path postgis postgres linedata

You could consider using pgRouting (http://www.postlbs.org/) .

This extension to Postgres/PostGIS needs some topology data, and topology support for PostGIS is still in its infancy and barely documented…



Now to implement ‘topology’ the *easy* way… ;-)

Use OpenJump! (http://openjump.org)

There’s a tool there called ‘Planar Graph’…

You can use it to get every line segment documented (start and end nodes for each line).

Add a ‘length’ field to your dataset.

OpenJump also has a tool to calculate areas and lengths… ;-)



Save your dataset from OpenJump into PostGIS.

The ‘length’ field acts as the initial per-edge cost for routing…



Then use the shortest_path() function from pgRouting and you’re on your way!
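
For reference, here is roughly what that call can look like from Python with psycopg2. This is a sketch only: the table and column names (roads, gid, source, target, length) are assumptions, and the shortest_path() argument list shown is pgRouting's classic Dijkstra signature, which may differ in your pgRouting version:

import psycopg2

conn = psycopg2.connect("dbname=gisdb user=postgres")
cur = conn.cursor()
cur.execute("""
    SELECT * FROM shortest_path(
        'SELECT gid AS id, source, target, length AS cost FROM roads',
        %s, %s,       -- start node id, end node id
        false, false  -- directed?, has reverse cost?
    )
""", (1, 42))
for vertex_id, edge_id, cost in cur.fetchall():
    print vertex_id, edge_id, cost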



Also heard from the developers of pgRouting that support for turn restrictions is somewhere in the near future… ;-)



(OJ people: I’m posting this to the list as I think it’s useful ;-) )



HTH,

Pedro Doria Meunier

Sunday, February 18, 2007

Making all columns in a table lower case

# get the table structure
psql -c "\d tablename" > 1.txt
# grab the first word of each line (the column name)
cat 1.txt | awk '{print $1}' > 2.txt
# build a file of SQL commands to run; the unquoted new name folds to lower case
for x in `cat 2.txt`; do
    echo "alter table tablename rename \"$x\" to $x;" >> 3.txt
done
# then skim 3.txt (the \d header lines produce a few junk rows) and paste it into psql

import zcta zip data to postgres postgis

Often you will receive data in a non-spatial form, such as comma-delimited data with latitude and longitude fields. To take full advantage of PostGIS's spatial abilities, you will want to create a geometry field in your new table and populate it using the longitude and latitude fields you have available.

General note: all the command statements that follow should be run from the pgAdmin III Tools -> Query Tool or any other PostgreSQL administrative tool you have available. If you are a command-line freak, you can use the psql command-line tool packaged with PostgreSQL.

Getting the data

For this exercise, we will use US zip code tabulation areas (ZCTAs) instead of just Boston data. The techniques here will apply to pretty much any data you get.

The first step is to download the data from the US Census: http://www.census.gov/geo/www/gazetteer/places2k.html

Importing the Data into PostGreSQL

PostgreSQL comes with a COPY command that allows you to import data from a delimited text file. Since the ZCTA data is provided in fixed-width format, we can't import it easily without first converting it to a delimited format, such as the tab-delimited format that COPY works with by default. Similarly, for data in other formats such as DBF, you'll either want to convert it to delimited using a tool such as Excel, use a third-party tool that converts from one format to another, or use one of my favorite tools, Microsoft Access, which allows you to link tables or do a straight import and export to any ODBC-compliant database such as PostgreSQL.

Create the table to import to

First you will need to create the table in PostgreSQL. Make sure the order of the fields matches the order of the fields in the data you are importing.


CREATE TABLE zctas
(
state char(2),
zcta char(5),
junk varchar(100),
population_tot int8,
housing_tot int8,
water_area_meter float8,
land_area_meter float8,
water_area_mile float8,
land_area_mile float8,
latitude float8,
longitude float8
)
WITHOUT OIDS;

Convert from Fixed-width to Tab-Delimited

For this part of the exercise, I'm going to use Microsoft Excel, because it has a nice wizard for dealing with fixed-width data and a lot of Windows users already have it. If you open the ZCTA file in Excel, it should launch the Text Import Wizard. MS Access has a similarly nice wizard and can deal with files larger than Excel's 65,536-row limit. Note there are trillions of ways to do this step, so I'm not going to bother going over the other ways. For non-MS-Office users, other office suites such as OpenOffice probably have similar functionality.

  1. Open the file in Excel.
  2. Import Text Wizard should launch automatically and have Fixed-Width as an option
  3. Look at the ZCTA table layout spec http://www.census.gov/geo/www/gazetteer/places2k.html#zcta and set your breakouts the same as specified. For the above I broke out the Name field further into first 5 for zcta and the rest for a junk field.
  4. Next, File -> Save As -> Text (Tab delimited)(*.txt); give it the name zcta5.tab
  5. Copy the file to somewhere on your PostGreSQL server.
  6. The COPY command

    Now copy the data into the table using the COPY command. Note that COPY runs under the PostgreSQL server process, so the file location must be specified relative to the server.


    COPY zctas FROM 'C:/Downloads/GISData/zcta5.tab';

    Creating and Populating the Geometry Field

    Create the Geometry Field

    To create the geometry field, use the AddGeometryColumn OpenGIS function. This will add a geometry field to the specified table, add a record to the geometry_columns meta table, and create useful constraints on the new field. A summary of the function can be found here: http://postgis.refractions.net/docs/ch06.html#id2526109.

    SELECT AddGeometryColumn( 'public', 'zctas', 'thepoint_lonlat', 4269, 'POINT', 2 );

    The above code will create a geometry column named thepoint_lonlat in the table zctas that validates to make sure the inputs are 2-dimensional points in SRID 4269 (NAD83 long lat).

    Populate the Geometry Field using the Longitude and Latitude fields


    UPDATE zctas
    SET thepoint_lonlat = PointFromText('POINT(' || longitude || ' ' || latitude || ')', 4269);

    The above code generates a text representation of a point and converts it to a PostGIS geometry object with spatial reference SRID 4269.

    There are a couple of things I would like to point out that may not be immediately clear to people unfamiliar with PostgreSQL or PostGIS:

    • || is a string concatenator. It is actually the ANSI-standard way of concatenating strings together. In MySQL you would do this using the CONCAT function, and in Microsoft SQL Server you would use +. Oracle also uses ||. So the inner part of the code generates something that looks like POINT(-97.014256 38.959448).
    • You can't just put any arbitrary SRID in there and expect the system to magically transform to that. The SRID you specify has to be the reference system that your text representation is in.

    Transforming to Another spatial reference system

    The above is great if you want your geometry in longlat spatial reference system. In many cases, longlat is not terribly useful. For example if you want to do distance queries with your data, you don't want your distance returned back in longlat. You want it in a metric that you normally measure things in.

    In the code below, we will create a new geometry field that holds points in the WGS 84 / UPS North reference system (SRID 32661, units in meters) and then update that field accordingly.


    SELECT AddGeometryColumn( 'public', 'zctas', 'thepoint_meter', 32661, 'POINT', 2 );

    UPDATE zctas
    SET thepoint_meter = transform(PointFromText('POINT(' || longitude || ' ' || latitude || ')',4269),32661) ;

    Index your spatial fields

    The number one reason for poor query performance is lack of attention to indexes. Putting in an index can make as much as a 100-fold difference in query speed, depending on how many records you have in the table. For large updates and imports, you should put your indexes in after the load: while indexes help query speed, updates against indexed fields can be very slow, because they need to create index records for the updated/inserted data. Below, we put GiST indexes on our spatial fields.


    CREATE INDEX idx_zctas_thepoint_lonlat ON zctas
    USING GIST (thepoint_lonlat);

    CREATE INDEX idx_zctas_thepoint_meter ON zctas
    USING GIST (thepoint_meter);

    ALTER TABLE zctas ALTER COLUMN thepoint_meter SET NOT NULL;
    CLUSTER idx_zctas_thepoint_meter ON zctas;

    VACUUM ANALYZE zctas;

    In the above, after we create the indexes, we put in a constraint to disallow nulls in the thepoint_meter field. The NOT NULL constraint is required for clustering, since as of now clustering is not allowed on GiST indexes that have null values. Next we cluster on this index. Clustering physically reorders the table in the order of the index. In general, spatial queries are much slower than attribute-based queries, so if you do a fair amount of spatial queries you get a huge gain.

    Finally, we vacuum analyze the table to ensure that index statistics are updated.

Friday, February 16, 2007

urlparse(url, scheme='', allow_fragments=1)

This is what we should use: urlparse(url, scheme='', allow_fragments=1)
Parse a URL into 6 components:
<scheme>://<netloc>/<path>;<params>?<query>#<fragment>
Return a 6-tuple: (scheme, netloc, path, params, query, fragment).
Note that we don't break the components up into smaller bits
(e.g. netloc is a single string) and we don't expand % escapes.
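
A quick interactive example (Python 2's urlparse module; newer versions return a ParseResult object that still behaves like this 6-tuple):

>>> from urlparse import urlparse
>>> urlparse('http://example.com/path;params?query=1#frag')
('http', 'example.com', '/path', 'params', 'query=1', 'frag')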

Wednesday, February 14, 2007

pgsql users, and schemas

so, have you discovered pgsql users and schemas yet?
psql -U postgres
psql> create user newuser password 'newuser';
psql> create schema newuser authorization newuser;
psql> \q
# psql -U newuser
psql> create table foo();
psql> \d
the table is then owned by the newuser user, in its own schema (like a namespace). this is somewhat how oracle does user/schema-owned tables.
this lets you have one database and then have many users within that database, each in their own schema.
it has much the same effect as having a different database for each project, i guess.

Adding a new id column Primary key for an existing table

10k rows. varchar types, no indexes. takes a while it seems.
ok j00 ready?
this is what i did:
alter table localeze_amacai_business add id integer;
create sequence localeze_amacai_business_id_seq;
create or replace function assign_localize_pk ()
returns integer as $_$
DECLARE
    _id integer;
    _count integer;
    _row record;
BEGIN
    _count := 0;
    for _row in select * from localeze_amacai_business LOOP
        select into _id nextval('localeze_amacai_business_id_seq');
        update localeze_amacai_business
            set id = _id
            where "PERSISTENTRECORDID" = _row."PERSISTENTRECORDID";
        _count := _count + 1;  -- count the rows we touched
    END LOOP;
    return _count;
END;
$_$ language plpgsql;
select assign_localize_pk();
drop function assign_localize_pk();
alter table localeze_amacai_business alter id set not null;
alter table localeze_amacai_business add constraint localeze_amacai_business_pk primary key (id);



but how do you set that sequence as the default for the new id column?
11:18:30 am
Travis
so now it was built using the sequence
select last_value from localeze_amacai_business_id_seq;
 last_value
------------
       9258
oh, that's e-z
just:
alter table localeze_amacai_business alter id set default nextval('localeze_amacai_business_id_seq');
so now new inserts will invoke the sequence and you don't have to specify the id value

or you could always do
select into _id nextval('localeze_amacai_business_id_seq');
insert into ... (id, ...) values (_id, ...)

i do not understand
i do not want to specify id values
11:20:33 am
Travis
i modified the existing table, added that "id" column to it
and it defaults to the sequence now
so how do you relate to the table if you don't care what its id is?

autoincrement sequence id

postgresql now has the insert into ... working
so create a table2 with all the columns that table1 (the original) has
hm, actually select into needs the target table to not exist
i guess it is possible in postgresql too, where you can keep the original table, make a new table with the id pk column, and then write a plpgsql function that does:
for _row in select * from table1 LOOP
    select into _id nextval('a_sequence');
    insert into new_table (id, ...) values (_id, ...);
END LOOP;

strip_phone_number

create or replace function strip_phone_number(
    _in varchar
) returns varchar as $_$
DECLARE
    _len integer;
    _i integer;
    _chr varchar;
    _test varchar;
    _result varchar;
BEGIN
    select into _len length(_in);

    _i := 0;
    select into _result '';
    while _i <= _len LOOP
        _i := _i + 1;
        -- grab the next character; keep it only if it matches a digit
        select into _chr substring(_in from _i for 1);
        select into _test substring(_chr from '[0-9]$');
        --_test := _chr;
        if (_test is not null) then
            select into _result _result || _test;
        end if;
        raise debug '%:%:%:%', _i, _chr, _test, _result;
    END LOOP;

    return _result;
END;
$_$ language plpgsql;

(u will need to fix the tabs thing in msn pastes)
select strip_phone_number('(123) 456-8909 x 1234');
strip_phone_number
--------------------
12345689091234
(1 row)
so that dumps the "not digit" characters from a string in pl/pgsql
but what good does that do?
don't you need the UI to undo that?

Sunday, February 11, 2007

Import TIGER database to PostGIS

wget http://www.gdal.org/dl/fwtools/FWTools-linux-1.2.0.tar.bz2
tar xjf FWTools-linux-1.2.0.tar.bz2
cd FWTools-1.2.0/
./install.sh
export LD_LIBRARY_PATH=/www/ask/work/ogr/FWTools-1.2.0/lib
export GDAL_DATA=/www/ask/work/ogr/FWTools-1.2.0/share
./bin/ogr2ogr -update -append -f "PostgreSQL" PG:"host=localhost user=postgres dbname=mydbname password=password" /www/ask/work/tiger/TGR06001.RT1 layer CompleteChain -nln masuf -a_srs "EPSG:4269"

Saturday, February 10, 2007

Installing postgis

yum install proj proj-devel

# get the postgres source; PostGIS builds inside its contrib directory
cd contrib
svn co http://svn.refractions.net/postgis/trunk postgis
cd postgis
./autogen.sh
./configure --with-pgsql=/usr/local/pgsql/bin/pg_config
make && sudo make install
/usr/local/pgsql/bin/createlang plpgsql coupon
/usr/local/pgsql/bin/createlang plpgsql coupon -U postgres
/usr/local/pgsql/bin/psql -d coupon -f lwpostgis.sql -U postgres
/usr/local/pgsql/bin/psql -d coupon -f spatial_ref_sys.sql -U postgres

Why multithreaded design was avoided

Multithreaded environments can be a headache. Experienced programmers know that and try to avoid threads, while on the other hand inexperienced programmers find them quite attractive and usually make a mess of their applications. It all boils down to synchronization: synchronization of threads can be very hard to get right and is fertile ground for a great number of bugs to grow. Add to that that race conditions and thread-related bugs can be extremely hard to hunt down, since the conditions to reproduce them may be unknown.

The efficiency of threads is also a concern. The scripting engine for a game must be fast. The game world contains many actors that need to be updated at least every frame; you don't want a scheduler to take up half of your CPU trying to decide which of many, many actors to run next. Also, if you have to spawn and delete bullet actors in the game (coming from a fast machine gun), you should start looking at thread pools and other techniques, since spawning a thread per bullet can take too long.

To sum it up: below is the list of reasons that multithreaded environments were avoided by game developers:

* Scheduling overhead
* Memory cost per thread
* Inefficient thread creation
* Synchronization problems
* More bug-prone
* Difficult to debug


The main feature that makes Stackless attractive: tasklets

The main feature that makes Stackless so attractive for use as a scripting language is its support for tasklets. Tasklets make it possible to create “micro-threads”, allowing the programmer to switch among several execution threads that exist only in the Python environment and have no dependencies on the underlying OS threads. Some would call these “green threads”. These threads have a very small footprint in memory and CPU: you can create hundreds of them with almost no overhead, since every tasklet costs only a few bytes of memory. Scheduling tasklets takes O(1) time with a simple round-robin scheduling algorithm. If we were talking about native threads, we would have almost 1 MB of memory per thread and high-cost scheduling doing things we don’t need. On top of all that, the engine would behave very differently on different operating systems, even on different versions of the same operating system.
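
To make that concrete, here is a minimal tasklet sketch using Stackless's documented primitives (tasklet, schedule, run); it assumes you are running the Stackless Python interpreter:

import stackless

def actor(name, frames):
    for i in range(frames):
        print name, 'frame', i
        stackless.schedule()   # hand control back to the round-robin scheduler

stackless.tasklet(actor)('guard', 3)    # bind args and schedule the tasklet
stackless.tasklet(actor)('bullet', 3)
stackless.run()   # run tasklets until they all complete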

Coroutines vs generators

Coroutines have a completely separate stack which is saved when they yield,
so you have a load of nested function calls and yield from deep in the
middle of them.

Generators save only a single stack frame, so all yields must come directly
from the generator, not from functions which it calls.

You can use generators to get a similar effect to coroutines by nesting
generators and propagating the yields back up the chain, but this has to be
done explicitly at every level, as the sketch below the quote shows.

--
Duncan Booth duncan@rcp.co.uk
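
Here is what that explicit propagation looks like in practice; a minimal sketch in plain Python 2 (which has no yield-from shortcut):

def inner():
    yield 1
    yield 2

def outer():
    yield 0
    for value in inner():   # every level must re-yield by hand
        yield value
    yield 3

print list(outer())   # prints [0, 1, 2, 3]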

Friday, February 09, 2007

Posting Flash Videos with FFmpeg and FlowPlayer

Last night I posted my very first Flash video on the web — and it was Anna sitting there, watching her own video for 2 minutes (which probably only interests the parents and grandparents). Anna’s video aside, I also had fun figuring out how to get that video online.

There are many ways putting videos online. You can either:

1. Upload your AVI/QuickTime/WMV files onto a folder somewhere inside your hosting account.
2. Use a third party video hosting service like Google Video or YouTube.

Personally I don’t like (2). You need to upload your videos to that third party, and you have little control over how the final outcome will be encoded (bit rate, frame rate, quality, etc). Moreover, there are terms and conditions that you need to read through, let alone agree to. In the end, who owns the rights to the uploaded video?

Being a control freak (well, only over the systems that I need to manage), I have always preferred option (1), hosting video files inside my own accounts, which have some crazy amount of space and data transfer anyway. Except you don’t get that nice Flash applet which you can embed into your own pages, so visitors can watch the video without leaving the page. They don’t need to worry about saving onto the desktop, which media player to use, whether the codec has been installed, etc. It Just Works™ — perfect for the grandparents :)

With a bit of time spent on research and mucking around, it turns out that you can easily achieve the effect of embedded Flash video and yet host the video files on your own server. And you don’t need to spend a penny — it can all be done with these open source tools: FFmpeg and FlowPlayer.
The Basis

Here’s a summary of what needs to be done.

1. Convert the video file into a suitable format for Flash players.
2. Upload the converted file onto hosted account.
3. Upload the Flash player, if that hasn’t been done already.
4. Paste HTML code snippet into the web page.

Flash players can only play video files encoded in the FLV (Flash Video) format, which is also the format used by Google Video and YouTube. The open source way to encode is to use the universal encoder, FFmpeg.
FFmpeg

Installing FFmpeg is trivial — at least on my Gentoo boxes :) Make sure appropriate USE flags are used during emerge. For example I have:

USE="aac amr encode ogg vorbis x264 xvid zlib" emerge ffmpeg

Other Linux distribution? Not using Linux? Err. Good luck.

To convert a movie using FFmpeg, do the following:

$ ffmpeg -i movie.avi movie.flv

It will then convert the AVI file into FLV Flash Video. FFmpeg can also handle many different container types, for example QuickTime, WMV1 (not WMV3 at the moment), MPEG4, etc, so just throw the video at it and see whether it handles it.

There are many command line options that you can use to alter the encoding behaviour. For example, if I wish to rescale the movie to 320×240 at 15 frames/sec, with video at 250 kbps and audio down-sampled to 22,050 Hz at 48 kbps, I just tell FFmpeg to do it on the command line:

$ ffmpeg -i movie.avi -s 320x240 -r 15 -b 250 -ar 22050 -ab 48 movie.flv

There are many more options so do check out their manual if you are interested.

There is another thing we need to do: create a JPEG thumbnail for previewing. This will be displayed in the otherwise empty canvas of the Flash player before [Play] is pressed. For convenience’s sake, we’ll take the very first frame of the video.

$ ffmpeg -i movie.avi -f mjpeg -t 0.001 movie.jpg

FLVTool2

FLVTool2 is needed to calculate and update metadata in the FLV file. Well, you don’t strictly need it, as you can already play the FLV file spat out by FFmpeg, but without that info the Flash player cannot show the buffering status, current playing position, etc.

I hesitated to install FLVTool2 because (1) it depends on Ruby, which I would need to emerge, and (2) there is no ebuild for it. But anyway, getting it running is still trivial.

1. Make sure you already have Ruby installed.
2. Download the latest FLVTool2
3. Unpack the tarball, change into its directory, and run ruby setup.rb all as root.

Now just run

$ flvtool2 -U movie.flv

Well, installation is actually optional. You can pretty much run FLVTool2 from inside its unpacked directory, for example:

$ RUBYLIB=lib ruby bin/flvtool2 -U /movie.flv

Your FLV is ready to go! Upload both FLV and generated JPEG thumbnail onto your web hosting account. Make sure they are in the same folder.
FlowPlayer

FlowPlayer is an open source Flash video player that is lightweight (at around 22 KB) and pretty easy to configure. Download the latest version from SourceForge.

Unpacking the ZIP gives you the player file FlowPlayer.swf. Upload it somewhere on your website.

Now you need to cut and paste this HTML code snippet onto the web page where you wish to show the video:

[The HTML snippet itself did not survive the blog software. It is the standard <object>/<embed> markup that points at FlowPlayer.swf and passes the video settings via flashvars; grab a current copy from the FlowPlayer documentation.]
[your site] is the URL to where you keep FlowPlayer.swf. [base URL] is the directory where you keep the FLV and JPEG files. For example, the final URL to the FLV file will be [base URL]/movie.flv.

Paste that onto your website, or into your blog post, and check whether it works!

Please check the FlowPlayer documentation for the options that go into flashvars.
Conclusion

In fact, those steps can easily be automated with a bit of scripting, as in the sketch below. I shall be posting more movies on Anna’s website.
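
A rough sketch of that automation in Python, chaining the same commands as above; it assumes ffmpeg and flvtool2 are on the PATH, and the flags are the ones used earlier in this post:

import subprocess

def publish(source, name):
    flv = name + '.flv'
    jpg = name + '.jpg'
    # encode to FLV with the settings from above
    subprocess.check_call(['ffmpeg', '-i', source, '-s', '320x240', '-r', '15',
                           '-b', '250', '-ar', '22050', '-ab', '48', flv])
    # grab the first frame as the preview thumbnail
    subprocess.check_call(['ffmpeg', '-i', source, '-f', 'mjpeg', '-t', '0.001', jpg])
    # inject the FLV metadata so players can show position/buffering
    subprocess.check_call(['flvtool2', '-U', flv])
    # finally, upload flv and jpg to the same folder on the web host (scp, rsync, ...)

publish('movie.avi', 'movie')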

If your hosting company’s quota is not very generous (i.e. small-timers who can’t really oversell), or if you think your video will get dugg and slashdotted and become an overnight hit, then maybe having Google Video or YouTube host it for you is the wiser idea, just in case a huge hosting bill lands on your credit card statement.

Otherwise, you might choose to host those videos on your own account, and regain a bit of control.

Monday, February 05, 2007

scaling rails

I've said it before, but it bears repeating: There's nothing interesting about how Ruby on Rails scales. We've gone the easy route and merely followed what makes Yahoo!, LiveJournal, and other high-profile LAMP stacks scale high and mighty.

Take state out of the application servers and push it to database/memcached/shared network drive (that's the whole Shared Nothing thang). Use load balancers between your tiers, so you have load balancers -> web servers -> load balancers -> app servers -> load balancers -> database/memcached/shared network drive servers. (Past the entry point, load balancers can just be software, like haproxy).

In a setup like that, you can add almost any number of web and app servers without changing a thing.

Scaling the database is the "hard part", but still a solved problem. Once you get beyond what can easily be managed by a decent master-slave setup (and that'll probably take millions and millions of pageviews per day), you start doing partitioning.

Users 1-100K on cluster A, 100K-200K on cluster B, and so on. But again, this is nothing new. LiveJournal scales like that. I hear eBay too. And probably everyone else that has to deal with huge numbers.

So the scaling part is solved. What's left is judging whether the economics of it are sensible to you. And that's really a performance issue, not a scalability one.

If your app server costs $500 per month (like our dual Xeons do) and can drive 30 requests/second on Rails and 60 requests/second on Java/PHP/.NET/whatever (these are totally arbitrary numbers pulled out of my...), then you're faced with a cost of $500 for 2.6 million requests/day on the Rails setup and $250 for the same on the other one.

Now. How much is productivity worth to you? Let's just take a $60K/year programmer: that's $5K/month. If you need to handle 5 million requests/day, your programmer only needs to be 10% more productive on Rails to make it even (the arithmetic is spelled out below). If he's 15% more productive, you're up $250. And this is not even considering the joy and happiness programmers derive from working with more productive tools (nor that people have claimed to be many times more productive).
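
For the curious, here is that break-even arithmetic spelled out as a throwaway Python calculation (all the numbers are the post's own hypotheticals):

import math

requests_per_day = 5000000
server_cost = 500          # $/month per app server
programmer_cost = 5000     # $/month

def servers_needed(requests_per_second):
    return math.ceil(requests_per_day / (requests_per_second * 86400.0))

extra_hardware = (servers_needed(30) - servers_needed(60)) * server_cost
print extra_hardware                    # 500.0 -> $500/month extra for Rails
print extra_hardware / programmer_cost  # 0.1 -> 10% productivity break-even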

Of course, the silly math above hinges on the assumption that the whatever stack is twice as fast as Rails. That’s a very big if. And totally dependent on the application, the people, and so on. Some have found Rails to be as fast or faster than comparable “best-of-breed J2EE stacks” — see http://weblog.rubyonrails.com/archives/2005/04/04/justingehtland-is-back-with-numbers-to-back-it-up/

The point is that the cost per request is plummeting, but the cost of programming is not. Thus, we have to find ways to trade efficiency in the runtime for efficiency in the “thought time” in order to make the development of applications cheaper. I believe we’ve long since entered an age where simplicity of development and maintenance is where the real value lies.
David Heinemeier Hansson
Tuesday, July 12, 2005

PSQL

psql is so good
hey
in psql
there is a \h
which gives you sql query help,
so \h select
then as you are typing things out, hitting tab gives you options for what to use next sometimes
try typing
alter table [tab]
then it displays list of tables it can see.
then there is \? which lists other meta commands
like \dt shows tables
\l lists databases
\dn lists schemas
\du lists users
well, for your setup mostly you have one user, one schema, and a couple of databases
when you are connected to one database in psql, \c newdbname
changes databases
after a few days you get used to the auto-complete features and the \h help; it feels like a gui, sort of, but much much faster than pgadmin
though, i got into postgresql after being forced to work with oracle (yuk)
the only oracle gui at the time was toad, which is by Que$t $oftware.
and we never could afford to buy it
so we learned the oracle meta database,
which is, oddly enough, tables in a database that describe the database
postgresql has this too
the pg_catalog, where there are what look like tables describing our user databases, schemas, tables, and other database objects.
the \d commands in psql are sort of shortcuts for this
but you can also select ... from the pg_* tables, and that gives us information on table features and columns, which is what pgadmin is doing behind the scenes to display everything all nicely formatted.
mysql, on the other hand, does not have a meta database, but then what do you expect from junk :)
in version 3.3 the "show databases" command actually invoked a system command to "ls" (list files) in the mysql directory, since mysql used to (still does?) create databases as directories on the file system.
this has the horrible side effect of making table names case sensitive, which violates the SQL standard (lol, and mysql claims to be sql compliant, but can't even get case-insensitive table names :!)
we discovered that one the hard way by migrating a mysql-on-windows app to mysql on unix, and of course developers will write code in different spots like
select * from MyTable
select * from mytable
select * from MYTABLE
etc

Sunday, February 04, 2007

python handy debug error tip

Here's a handy way to make debugging your web.py scripts a little
easier. Just add this to your main script (before you do web.run()):

def error():
    if web.webapi.ctx.ip == '': web.debugerror()
    else: origerror()

origerror = web.webapi.internalerror
web.webapi.internalerror = error

Add your IP address where the empty string is. This will show detailed debug
output if an exception occurs, but only when the request is from your
IP address. Anyone else will get the usual "internal server error"
message. This is a convenient way to make your web.py app securely
debuggable without having to manually switch back and forth between
debug/deploy modes every time you want to make a change.

Cheetah base templates

You can define a base class:

class base:
    def __init__(self):
        web._compiletemplate('default.html', base='base')

Use it:

class page1(base):
    def GET(self):
        web.render('page1.html')


Your templates:

#DEFAULT.HTML
<html>

<body>
#block content
CONTENT
#end block
</body>
</html>

#PAGE1.HTML
#extends base

#def content

PAGE 1


You're on the 1st page.
#end def

#PAGE2.HTML
#extends base

#def content

PAGE 2


You're on the 2nd page.
#end def

Saturday, February 03, 2007

Ssh keys

you need to edit /etc/ssh/sshd_config and disable password auth

but before that, you need to make sure you have the ssh keys set up, and the authorized_keys entry, and the directory permissions.

i usually just
ssh-keygen -t dsa -b 1024
that creates
$HOME/.ssh/id_dsa
$HOME/.ssh/id_dsa.pub
then cat $HOME/.ssh/id_dsa.pub >> $HOME/.ssh/authorized_keys
then chmod -R 700 $HOME/.ssh
then edit /etc/ssh/sshd_config and set PasswordAuthentication no.

but the quickest defense is to edit the sshd startup script and add
-p 1234
or some other not commonly thought of port

so then it can work as it is, but you just
ssh -p 1234 yourbox
when ssh keys are working properly, you should be able to ssh without entering a password.

so your system has the private and public keys, and the remote web server only needs the contents of id_dsa.pub appended to its authorized_keys

Do not forget to check for NULL [NULL + int in postgres results in NULL]

-- Function: update_total_votes_pictures()

-- DROP FUNCTION update_total_votes_pictures();

CREATE OR REPLACE FUNCTION update_total_votes_pictures()
RETURNS int4 AS
$BODY$ -- returns the number of pixpair entries that were created
DECLARE
_count integer;
all RECORD;
_total_a integer;
_total_b integer;
_total integer;
BEGIN

_count := 0;

FOR all in
select id from pictures
LOOP
select into _total_a sum(pic1_votes) from pixpair where pic1_id = all.id;
if ( _total_a is NULL ) then
_total_a := 0;
end if;
select into _total_b sum(pic2_votes) from pixpair where pic2_id = all.id;
if ( _total_b is NULL ) then
_total_b := 0;
end if;
_total := _total_a + _total_b;
update pictures set total_votes = _total where id = all.id;

_count := _count + 1;
END LOOP;

return _count;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;
ALTER FUNCTION update_total_votes_pictures() OWNER TO postgres;