Full Version: combinations
Mr. John \ Kent
Thank you Jeff,

Very nice.

I will give it a try.
(In some cases I know the values will be digits).

John Kent

-----Original Message-----
From: Jeff 'japhy' Pinyan [mailto:[Email Removed]]
Sent: Saturday, July 17, 2004 10:34 AM
To: Kent, Mr. John (Contractor)
Cc: [Email Removed]
Subject: Re: Efficient Untaint?


On Jul 17, Kent, Mr. John (Contractor) said:

QUOTE
Is there a more efficient/better way to untaint variables
pulled from a cgi query object?

I'd make an untaint function that took the param() name, a regex to use,
and a default value to use.

sub untaint {
my ($name, $rx, $default) = @_;
my $ok = $query->param($name) =~ $rx ? $1 : $default;
$query->param($name, $ok);
}

You use it like so:

my $MOSAIC_SCALE = untaint('MOSAIC_SCALE', qr/(\d+)/, 20);
# etc.

As for your code:

QUOTE
my($MOSAIC_SCALE)    = $query->param('MOSAIC_SCALE')    || "20";
$MOSAIC_SCALE =~ /(\d+)/;
$MOSAIC_SCALE = $1;

You should *never* use $DIGIT variables after a regex unless you're sure
the regex *matched*.
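
For example, a minimal sketch of the safe pattern (the values here are made up):

my $raw = "20";                        # e.g. a CGI parameter value
my $scale;
if ( defined $raw and $raw =~ /(\d+)/ ) {
    $scale = $1;                       # $1 is only used after a successful match
}
else {
    $scale = 20;                       # fall back to the default
}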

--
Jeff "japhy" Pinyan % How can we ever be the sold short or
RPI Acacia Brother #734 % the cheated, we who for every service
http://japhy.perlmonk.org/ % have long ago been overpaid?
http://www.perlmonks.org/ % -- Meister Eckhart

Mr. John \ Kent
Gunnar,

Thank you. Excellent suggestion.
Undoubtedly I've gota lota unnecessary
untaintin' goin' on!

Thanks,
John Kent

-----Original Message-----
From: Gunnar Hjalmarsson [mailto:[Email Removed]]
Sent: Saturday, July 17, 2004 11:37 AM
To: [Email Removed]
Subject: Re: Efficient Untaint?


Mr. John Kent wrote:
QUOTE
Is there a more efficient/better way to untaint variables
pulled from a cgi query object?

Here is an example of what I am currently doing:

#!/usr/bin/perl -wT
use strict;
use CGI;
my($query) = new CGI;

# I then have 30  untaint checks like this before I start
# coding.

Do all the 30 parameters need to be validated in the form of
untainting? For params that will not be used directly in system
operations, you may want to consider something simpler.

Personally I like to populate a hash with the CGI input, and assuming
that has been done, and that you don't need to reassign the parameters
in the CGI object, you could for instance do:

$in{MOSAIC_SCALE} =~ /^\d+$/ or $in{MOSAIC_SCALE} = 20;

or even just:

$in{MOSAIC_SCALE} ||= 20;

For params that need untainting, I like Jeff's suggestion.

--
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl


Rob Hanson
QUOTE
I still don't know how to declare arrays using only '$' instead of '@'

You can't. But you can store a *reference* to an array in a scalar.

This will work:

# the backslash ("\") returns a reference to the
# variable, so this doesn't actually pass the array,
# it passes a reference (pointer sort of) to the array.
goodsub(\@array, $scalar);

sub goodsub

{
my ($array_ref,$scalar) = @_;

# turns the ref back to an actual array.
my @array = @{$array_ref};

# or use the array directly through the ref.
# note that changes made through a ref will change
# the original array.
print $array_ref->[0];
}

QUOTE
Is it possible to write scripts using only '$'
instead of other prefix symbols?

No, not the way you intend. You could use only references, but that
wouldn't make sense.
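
For what it's worth, a minimal sketch of that references-only style (made-up
data), which shows why it buys you little:

my $aref = [ 1, 2, 3 ];                # array reference held in a scalar
my $href = { name => 'gohaku' };       # hash reference held in a scalar
print $aref->[0], "\n";                # element access through the ref
print $href->{name}, "\n";
print scalar @$aref, "\n";             # you still need @ to count the elements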

Rob

-----Original Message-----
From: gohaku [mailto:[Email Removed]]
Sent: Sunday, July 18, 2004 8:59 PM
To: Perl Beginners
Subject: Another Perl datatype headache ( scalars $, hashes %, and
arrays @ )


Hi everyone,
after writing perl scripts for about 3 years now, I still have trouble
with the
basic datatypes.
I know that variables that start with '$' are scalars.
This covers Hashes ($HASH{$key}), Arrays ( $_[0] ), and
regular scalar values ( $foobar );

The code I write, as well as others' code, is still unreadable to me
even though I have followed examples from the
Camel book, many other Perl books from O'Reilly and online references.
I have also used perldoc on many occasions.

There are still some things that haven't sunk in, such as:

If I want to add Hash keys to another Hash, I do the following:

%HASH = ( 1 => 'one' ); #NO BRACES OR ELSE....
%HASH2 = ( 2 => 'two' ); # AGAIN, NO BRACES OR ELSE...
@HASH2{ keys %HASH } = "";
#confusing, considering it's the symbol used for arrays

To get the length of an array, it's $#array, not #@array or #$array.
Usually, I use scalar @array;

Problems with subroutines where the array is the first argument
sub badsub()

{

my (@array,$scalar) = @_;
#Pass Array last!
#my ($scalar,@array) = @_;
...
}

I still don't know how to declare arrays using only '$' instead of '@'

anyway, Is it possible to write scripts using only '$' instead of other
prefix symbols?
In other words, a php-style script written in perl

Thanks in advance.
-gohaku



John Moon
Dear sir,


I have two array variables. I want to find $a[1] and replace it with $b[1] in a
file.
($a[1],$b[1] are array variables)
How to find and replace the variable contents.

for ($i=0;$i<$totalnum;$i++){
s/$a[i]/$b[i]/g;
}

Is it possible to do search and replace or kindly suggest me an idea


Thanking you

Regards
Baskaran NK



Please give us a little more information...

Is the record "fixed length", "packed", "delimited", or "other"?

What is the size of the file?

Is the occurrence of the value $a[i] only once per record?

Maybe a few lines of the file would help us...

jwm

Prasanna Kothari
Hi,
For replacing the contents of an array :
This code snippet replaces the string "MODIFIED" by "MOD"
foreach(@arr) {
s/MODIFIED/MOD/g;
}
foreach(@arr) {
print "$_n";
}

This is a round about way of assigning values of one array to another.
#!/usr/bin/perl -w
@arr1 = ("This","is","something","cool");
@arr2 = ("That","was","nothing","cool");

for($i=0;$i<scalar(@arr1);$i++) {
$arr1[$i]=~s/$arr1[$i]/$arr2[$i]/g;
}
foreach(@arr1) {
print "$_n";
}
Is this what you are looking at?
Am I missing something here?
--Prasanna

Moon, John wrote:

QUOTE
Dear sir,


I have two array variables. I want to find $a[1] and replace $b[1] in a
file.
($a[1],$b[1] are array variables)
How to find and replace the variable contents.

for ($i=0;$i<$totalnum;$i++){
s/$a[i]/$b[i]/g;
}

Is it possible to do search and replace or kindly suggest me an idea


Thanking you

Regards
Baskaran NK



Please give us a little more information...

Is the record "fixed length", "packed", "delimited", or "other"?

What is the size of the file?

Is the occurrence of the value $a[i] only once per record?

Maybe a few lines of the file would help us...

jwm




Sherm Pendley
krakle wrote:

QUOTE
And I stop your post here. You summed it up...

Yes I did, and I'll do so again. You said that Vars() must be called in
list context, not scalar context. That is false, and the fact that it is
false is clearly documented in 'perldoc CGI' - a document you clearly
are not familiar with.

Further, you claimed that the OP called Vars() as a function, but the
code he posted read 'my $params = $q->Vars()' - clearly you don't know
the difference between a function and a method.

Mark asked for help. The error he received is "Undefined subroutine";
calling Vars() in scalar context will not produce that error, and
neither will calling Vars() as a method when it has also been imported
as a function.

Calling Vars() as a function *will* produce that error, if it has not
been imported as a function. But in the code Mark posted, it *is*
imported; not only that, it's called as a method. Doing both is wrong
only in terms of style - it won't cause an error.
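
For reference, a minimal sketch of the call styles 'perldoc CGI' describes
(untested here, and the parameter name is made up):

use CGI qw(:cgi-lib);            # the :cgi-lib tag exports Vars() as a function
my $q = CGI->new;

my %params  = $q->Vars;          # method call in list context: a plain hash
my $params  = $q->Vars;          # method call in scalar context: a tied hashref
my %params2 = Vars();            # function call also works once it's imported
print $params->{name} if defined $params->{name};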

You were wrong on at least one point, and overall your suggestions were
useless. Calling me rude won't change those facts.

sherm--

--
Cocoa programming in Perl: http://camelbones.sourceforge.net
Hire me! My resume: http://www.dot-app.org

Mark
"Sherm Pendley" <[Email Removed]> wrote in message
news:[Email Removed]...
QUOTE
krakle wrote:

I pointed out exactly what
he did wrong... Using OO while CGI is called as function.

In the post I replied to, the only thing you "pointed out" was this:

$params = $q->Vars;


my %params = $q->Vars;


The "error" that you're "correcting" above is not an error, nor does it
have anything to do with methods vs. functions. Calling Vars() in scalar
context is allowed, regardless of whether you call it as a method or as
a function.


Thanks for posting your advice, which I have taken on board. Many of the
comments are moot now anyway as I emailed my host support who told me they
don't even have CGI::Vars installed anyway and won't do it (cheapskates!).
So all the code posted probably *should* work if that were not the case.
Hence the error message and my earlier suspicions were correct. I've opted
for changing the way I post my variables from the PHP server so I don't need
to use Vars(), just param().

[snip]

QUOTE
So drop the snide attitude and whiny "Geesh" comments please.

I totally agree with you. I wish some people would try to remember what it
was like when *they* first started learning to code. It's this kind of
attitude that puts people off Usenet and totally negates any helpfulness of
the comments being offered. I tend to avoid people who post comments like
this as they often just need their egos massaged by being able to patronise
newbies. Thankfully there are people like yourself who are genuinely trying
to help people starting out in code and not just show-boating.

Thanks.
Mark

Jose Alves De Castro
On Mon, 2004-08-02 at 14:52, Prasanna Kothari wrote:
QUOTE
Hi,

Hi

QUOTE
[... code ...]

I have tried your code and it works...

QUOTE
Notice that the code snippet is executing vi with a file and hence
editor opens up the file. After typing if I give a wrong vi command for
eg: "W" instead of "w"; $val gets a value greater than zero(256) and

This does not happen to me :-|

QUOTE
hence the else part gets executed. The number of times I give a wrong vi
command , the return value gets incremented by 256.
This snippet is part of a larger program, which opens a editor and asks
the user to type in, after the user saves the file contents and exits
out of the editor, the program  reads the contents of the file for
further processing.

Is there a way to overcome this situation?

Are you sure the problem is in that piece of code? Have you tried
separating it from the rest of code and test it alone? Because that's
what I've done and it works... $val has always been 0 :-|

QUOTE
Thanks in advance

HTH,

jac

QUOTE
Prasanna





--

José Alves de Castro <[Email Removed]>
http://natura.di.uminho.pt/~jac

Have them pass the arrays as a reference. For example:

@array1 = (1, 2, 3, 4, 5);
@array2 = (6, 7, 8, 9, 10);
mysub(\@array1,\@array2);

sub mysub{

my ($array1, $array2) = @_;
#process @{$array1}
#process @{$array2} etc
return @array3;
}

Look into perldoc perlref

Prototyping is another option, though not recommended unless it's really
needed. perldoc perlsub for more info.

--
-will
http://www.wgunther.tk
(the above message is double rot13 encoded for security reasons)

Most Useful Perl Modules
-strict
-warnings
-Devel::DProf
-Benchmark
-B::Deparse
-Data::Dumper
-Clone
-Perl::Tidy
-Beautifier
-DBD::SQLite

Charles K. Clarkson
From: Mark Cohen <mailto:[Email Removed]> wrote:

: Hello ,
:
: I have a transferred a file from an IBM mainframe
: to a windows platform that I need to analyse. The
: file contains an 8 byte floating point hexadecimal
: representaion 44FE880000000000.
:
: This should be converted to the number 65160.

When I use this sub I get 1.21711040165713e-008
not 65160.

print floatmvs( '44FE880000000000' );


: sub floatmvs {
: my $mat=0;
: my $firstbyte = unpack "H2", $_[0];
: my $exp=$firstbyte-40; # base 16
: my $bin=unpack('B*',substr($_[0],1,7));
: for ($start=0; $start <56; $start+=1) {
: $bit=substr($bin,$start,1);
: $bitpos=$start+1;
: if ($bit == 1) {
: $val=(1/2)**($bitpos);
: $mat=$mat+$val;
: }
: }
: my $num=$mat*(16**$exp);
: return $num;
: }

With 'strict' and 'warnings' turned on, I get
the same result with this.

use strict;
use warnings;

print floatmvs2( '44FE880000000000' );

sub floatmvs2 {
my @bits = split //, unpack 'B*', substr( $_[0], 1, 7 );

my $mat = 0;
foreach my $pos ( 0 .. $#bits ) {
$mat += $bits[ $pos ] * ( 1 / 2 ) ** ( $pos + 1 );
}

my $exp = unpack( 'H2', $_[0] ) - 40;
return $mat * ( 16 ** $exp );
}
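
If the '44FE880000000000' above is the hex *text* of the eight bytes rather
than the raw bytes themselves, packing it first gives the expected 65160; a
minimal sketch (using hex() for the exponent byte, which also keeps first
bytes above 0x49 honest):

use strict;
use warnings;

sub ibm_float {
    my $bytes = shift;                                # 8 raw bytes of IBM hex float
    my $exp   = hex( unpack 'H2', $bytes ) - 64;      # excess-64 exponent
    my @bits  = split //, unpack 'B*', substr( $bytes, 1, 7 );
    my $mant  = 0;
    $mant += $bits[$_] * 0.5 ** ( $_ + 1 ) for 0 .. $#bits;
    return $mant * 16 ** $exp;
}

print ibm_float( pack 'H*', '44FE880000000000' ), "\n";   # prints 65160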


HTH,

Charles K. Clarkson
--
Mobile Homes Specialist


Paul Harwood
The table is fairly complicated. I'll take a look at those modules
though. Thanks!

-----Original Message-----
From: Chris Devers [mailto:[Email Removed]]
Posted At: Wednesday, August 04, 2004 5:03 PM
Posted To: Perl
Conversation: Sorting HTML tables
Subject: Re: Sorting HTML tables

On Wed, 4 Aug 2004, Perl wrote:

QUOTE
I wrote some code to identify and print HTML tables below

Don't do that.

HTML is tremendously difficult to analyze properly with tools like
regular expressions.

You're much, much better off using a proper parser library that can
build up a tree model of the html that you can analyze as you like.

The standard libraries for this are probably HTML::Parser and
HTML::TreeBuilder. You may also like HTML::TableContentParser.

<http://search.cpan.org/~gaas/HTML-Parser-3.36/Parser.pm>
<http://search.cpan.org/~sburke/HTML-Tree-3.18/lib/HTML/TreeBuilder.pm>
<http://search.cpan.org/~sdrabble/HTML-TableContentParser-0.13/TableContentParser.pm>

This may point you in a useful direction:

use HTML::TableContentParser;
$p = HTML::TableContentParser->new();
$html = read_html_from_somewhere();
$tables = $p->parse($html);
for $t (@$tables) {
for $r (@{$t->{rows}}) {
print "Row: ";
for $c (@{$r->{cells}}) {
print "[$c->{data}] ";
}
print "n";
}
}

Something like this should work even for godawful ms-html :-)

QUOTE
The problem I am stuck with is that now I want to sort the tables
based on a Priority (which range from 1-3). There may be several
tables with the same priority numbers.  An example of a Priority 3
would be:

# extraordinarily ugly html omitted

I need help in understanding the methodology in how to extract these 2
items and then sort the tables in Priority order (all the 1's, 2's and 3's).

It looks like HTML::TableContentParser makes sorting through the
structure of the table pretty easy; HTML::Parser could go farther by
reducing it down to just the printable text -- some combination of the
two may be useful here.

Once you've stripped out all the junk (all the span tags, the paragraph
tags, the "<o:p></o:p>" type debris, etc), you just need to convert
the html structure into some kind of populated data structure.
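
As a starting point, here is a minimal sketch of that idea, building on the
parse() example above (the layout is hypothetical: it assumes the priority
digit shows up somewhere in each table's first row):

use strict;
use warnings;
use HTML::TableContentParser;

my $html   = read_html_from_somewhere();     # same placeholder as above
my $tables = HTML::TableContentParser->new->parse($html);

# find a bare 1, 2 or 3 in the first row's cells; default to 9 if none
sub priority_of {
    my ($t) = @_;
    for my $cell ( @{ $t->{rows}[0]{cells} || [] } ) {
        return $1 if defined $cell->{data} && $cell->{data} =~ /\b([123])\b/;
    }
    return 9;
}

my @sorted = sort { priority_of($a) <=> priority_of($b) } @$tables;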

You didn't give enough of the html to suggest what the rest of the table
is structured like -- it was really just one big hairy table cell -- so
it's hard to guess how the other pieces fit together.

Can you post a simpler example of what the table is built like, e.g.:

+------------+-------+---------------+----------------+
| priority 1 | field | another field | some more |
+------------+-------+---------------+----------------+
| priority 3 | field | any data here | other things |
+------------+-------+---------------+----------------+
| priority 2 | field | stuff stuff | whatever |
+------------+-------+---------------+----------------+

Or is it more complicated than that?



--
Chris Devers [Email Removed]
http://devers.homeip.net:8080/blog/

np: 'Lujon'
by Henry Mancini
from 'The Best Of Mancini'


Gunnar Hjalmarsson
Chris Mortimore wrote:
QUOTE
Gunnar Hjalmarsson wrote:
Chris Mortimore wrote:
I want to sort an AoH.  Not each hash by its keys, but the
array by the value of one of the keys in each hash.

The value of one of the keys? If you don't know *which* key in
respective hash, this appears to be pretty tricky...

Of course I know _which_ key.  Each hash has a key "date_tm", I
want to sort all the hashes in the array by their date_tm value
which is in the format: yyyymmddhhmm.

Aha, the keys have the same name... Good! Then Randy's suggested code
should do.

As regards documentation, besides "perldoc -f sort", there is a FAQ
entry that is very much applicable to this problem: "How do I sort an
array by (anything)?"

--
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl

Randy W. Sims
Christopher J. Bottaro wrote:
QUOTE
is there a way to iterate over the fields of a Class::Struct (in the order
they were declared)?

No. You could store the data used to generate the struct, and then use
it later.

my @struct_data = [
key1 => 'type',
key2 => 'type',
];

struct Name => @struct_data;

then iterate over @struct_data...

QUOTE

yeah i know its a long shot, but perl sometimes does things for me that i
never would have believed...much less knew to expect...;)

also, i know you can do this with hashes (although in "random" order, unless
you sort the keys), but hashes are too error prone for my liking.  i.e.
$my_hash{TYPO} = $blah; does not result in an error...=(

If you're using version 5.8 or later you can use restricted hashes. See
`perldoc Hash::Util`
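
A minimal sketch of what that buys you (the field names are made up):

use Hash::Util qw(lock_keys);

my %record = ( key1 => undef, key2 => undef );
lock_keys(%record);
$record{key1} = 'ok';        # fine
$record{TYPO} = 'oops';      # now dies instead of silently adding a new key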

Randy.

Gunnar Hjalmarsson
Edward Wijaya wrote:
QUOTE
Thanks so much for your reply Gunnar,

The purpose is as follows.

For example these lines:

AGCGGGGAG,AGCGGGGCG,AGCCGGGCG,AGCCAGGAG 15.
AGCGGAGCG,AGCCGAGGG,AGCGGAGGG          16.
_____________________________________/ _____________/
@Array1                              $Key1

Is that an array with 7 elements?

No. They are 2 arrays, with 4 elements and 3 elements respectively. For
this I want to store them in a hash of arrays.

What do you mean by the scalar variable $Key1 that points to 2
numbers?

What I mean by the scalar variable is: $Key2 = scalar(@Array1), i.e.
the number of elements of that array.

So $Key2 for line1 = 4,
and $Key2 for line2 = 3

I want to sort the hash based on this value as well as $Key1.

Okay. If I understand you correctly, you don't need any additional
keys to be able to sort by number of elements in the arrays, since
that info is still conveniently available.

my %HoA = (
'15.' =>
[ 'AGCGGGGAG','AGCGGGGCG','AGCCGGGCG','AGCCAGGAG' ],
'16.' =>
[ 'AGCGGAGCG','AGCCGAGGG','AGCGGAGGG' ],
);

print "Sorted by keysn";
for ( sort { $a <=> $b } keys %HoA ) {
print "$_: @{ $HoA{$_} }n";
}

print "n";

print "Sorted by number of elementsn";
for ( sort { @{ $HoA{$a} } <=> @{ $HoA{$b} } } keys %HoA ) {
print "$_: @{ $HoA{$_} }n";
}

--
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl

Jose Nyimi
QUOTE
-----Original Message-----
From: Randy W. Sims [mailto:[Email Removed]]
Sent: Saturday, August 7, 2004 02:50
To: [Email Removed]
Cc: [Email Removed]
Subject: Re: iterate over the fields of a struct?

Christopher J. Bottaro wrote:
is there a way to iterate over the fields of a Class::Struct (in the
order
they were declared)?

No. You could store the data used to generate the struct, and then use
it later.

my @struct_data = [
key1 => 'type',
key2 => 'type',
];

struct Name => @struct_data;

then iterate over @struct_data...


did you mean:

my $struct_data = [ #array_ref
key1 => 'type',
key2 => 'type',
];

struct Name => $struct_data;
then iterate over @$struct_data ...

?

array = array_ref is confusing to me ...
could you explain, please?

QUOTE

yeah i know its a long shot, but perl sometimes does things for me that i
never would have believed...much less knew to expect...;)

also, i know you can do this with hashes (although in "random" order, unless
you sort the keys), but hashes are too error prone for my liking. i.e.
$my_hash{TYPO} = $blah; does not result in an error...=(

If you're using version 5.8 or later you can use restricted hashes.
See
`perldoc Hash::Util`


Helpful module!
Hash vivification is a nightmare, really ...

Jos.

Gunnar Hjalmarsson
Edward Wijaya wrote:
QUOTE

my @AoH = (
{ values => ['AGCGGGGAG','AGCGGGGCG','AGCCGGGCG','AGCCAGGAG'] },
{ values => ['AGCGGAGCG','AGCCGAGGG','AGCGGAGGG'] },
);

for ( 0..$#AoH ) {
$AoH[$_]->{ic} = compute_ic( @{ $AoH[$_]->{values} } );
}

print Dumper @AoH;

Thanks Gunnar,
I managed to construct the Array of Hashes (@AoH) - Glad I did
that! Now, I don't have a clue how to sort these hashes
according to IC value and number of elements.

What have you done to find out?

perldoc -f sort
perldoc -q "sort an array"

I think you also need to read up on data structures:

perldoc perldsc

This is one suggestion:

print "Sorted by ic valuen";
for my $hashref ( sort { $a->{ic} <=> $b->{ic} } @AoH ) {
print "$hashref->{ic}: @{ $hashref->{values} }n";
}

print "n";

print "Sorted by number of elementsn";
for my $hashref ( sort {
@{ $a->{values} } <=> @{ $b->{values} }
} @AoH ) {
print "$hashref->{ic}: @{ $hashref->{values} }n";
}

--
Gunnar Hjalmarsson
Email: http://www.gunnar.cc/cgi-bin/contact.pl

Marcos Rebelo
QUOTE
-----Original Message-----
From: Singh, Harjit [mailto:[Email Removed]]
Sent: Monday, August 09, 2004 4:05 PM
To: [Email Removed]
Subject: Trying To write a script


I am trying to write a script that would be able to read a file.  The
file is broken into number of segments and each segment starts with a
similar string pattern of following type:  2.2.x.y.z: followed with
white space, where x, y, z numbers change throughout the file. The
segment further has a number of things that I am looking for.  I want to
be able to capture the segment value in addition to other things in a
specific segment.  What is the best approach to be able to make this
possible?  I have tried number of things but have not been able to
capture the right regular expression to capture the information.  I
would appreciate if any one can send in their response...

Regards,
Harjit Singh


Can you send a file example.

I'm not considering the file size here.

use Data::Dumper;
my @file = split(/(2\.2\.\d+\.\d+\.\d+) /, $fileText);
shift(@file);
my %blocks = @file;
print(Dumper(%blocks));

In %blocks you would find the segment numbers as keys, ready to be parsed,
and the rest of each segment as the values.
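
A hypothetical worked example of that split-into-a-hash idea (the segment
text is made up):

use strict;
use warnings;
use Data::Dumper;

my $fileText = '2.2.1.4.7 first segment here 2.2.1.4.8 second segment here';
my @file     = split /(2\.2\.\d+\.\d+\.\d+) /, $fileText;
shift @file;                 # drop the empty field before the first match
my %blocks   = @file;        # segment numbers become keys, the text the values
print Dumper(\%blocks);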

One real example could help.

Marcos

Chris Devers
On Mon, 9 Aug 2004, SilverFox wrote:

QUOTE
Example:
user enter: 59443
Script will output: 58M

I know this isn't getting into the spirit of things, but have you
considered simply using the `units` program?

% units
500 units, 54 prefixes
You have: 59443 bytes
You want: megabytes
* 0.056689262
/ 17.640025
You have: 59443 bytes
You want: kilobytes
* 59.443
/ 0.016822839
You have: ^C
% units bytes kilobytes
* 0.001
/ 1000

% units bytes megabytes
* 9.5367432e-07
/ 1048576

The nice thing about `units` -- in this context -- is that it lets the
user pick the conversion units they want to work with, and also gives
hints for converting both to & from the alternate measurement scale.

Of course, working this into a larger program that does other things
might be annoying -- in which case your way is better -- but if all you
want is the conversions, this is a solved problem :-)
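
And for the stay-in-Perl route, a hypothetical sketch of the conversion being
asked about (the rounding and unit labels are guesses):

sub human_size {
    my $bytes = shift;
    for my $unit (qw( B K M G )) {
        return sprintf '%.0f%s', $bytes, $unit if $bytes < 1024 or $unit eq 'G';
        $bytes /= 1024;
    }
}

print human_size(2_500_000), "\n";   # prints "2M"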


--
Chris Devers

David Dorward
On 9 Aug 2004, at 14:34, SilverFox wrote:

QUOTE
Hi all, I'm trying to write a script that will allow a user to enter a
number and that number will be converted into KB, MB or GB depending on
the size of the number. Can someone point me in the right direction?

What have you got so far? Where are you stuck? Getting user input
(where from)? Working out which order of magnitude the number is?
Converting between kilo and mega et al? Showing the output?

Show us some code.

--
David Dorward
<http://dorward.me.uk/>
<http://blog.dorward.me.uk/>

Christopher J. Bottaro
Randy W. Sims wrote:
QUOTE
If you're using version 5.8 or later you can use restricted hashes. See
`perldoc Hash::Util`

heh, that was exactly what i'm looking for, thanks. ugh, now i gotta
rewrite 3 days worth of code with restricted hashes instead of
Class::Struct ...=/

seane
I have this code and receive the message "Can't modify subroutine
entry in scalar assignment at script.pl line 165, near "$_;" " when
running on perl, version 5.005_03 but not when running on perl version
v5.6.1. I know the easiest answer would be to upgrade the older version
of perl but I have NO control over that.

Is there a change I can make for this to work on 5.005_03 as well?

I appreciate your help:

$conmsg=("Agent is now connected");
$disconmsg=("Agent is now disconnected");

foreach (@logarray)
{
our $msg = $_; #this is line 165.
if ($msg=~ "$conmsg")
{
$flag="TRUE"
}
if ($msg=~ "$disconmsg")
{
$flag="FALSE"
}
}
if ($flag eq "FALSE")
{ system("echo Please check $filein. The following message
was found: $disconmsg. |mail -s "$0 - $sysName"
[Email Removed]");
print"An error was found, an email has been sent. n";
exit(1);
}

[Email Removed] (seane) wrote in message news:<[Email Removed]>...
QUOTE
OK you ever spell check a document and you are so far off that spell
check has no idea what you are trying to spell????

That's what I feel like here.. Am I that far off base?
What I have here works it's just not exactly what I want...

At the end of the script running; I basically want to know if the
agent is still connected or has it disconnected, sometimes it will
drop connection but then reconnect on its own and each event will
write to the log.

Any pointers/suggestions would be appreciated.




[Email Removed] (seane) wrote in message news:<[Email Removed]>...
I read this log file twice a day using cron. Normally I should
see one "connected" message when everything is working and if so exit
the script.  If not send an email to myself and then exit. Sometimes I
get a following disconnected message and I get sent an email if that
occurs. However sometimes I get a following connected message after
the first disconnected message then I am still ok. How can I read this
log and only get sent an email if the disconnected message is the last
string found or if no connected string was found at all?


I have a log file containing:

Fri Jul 23 13:53:54 2004: Agent is now connected.
Fri Jul 23 13:54:54 2004: Agent is now disconnected.
Fri Jul 23 13:55:54 2004: Agent is now connected.
Fri Jul 23 13:56:54 2004: Agent is now disconnected.

I have this code:

#!/usr/bin/perl -w
$filein=("logfile");
open (LOG, "<$filein");
@logarray=<LOG>;
###########################################################################
print"starting the log search for connectednn";
###########################################################################
$lookfor=("Agent is now connected");

@match=grep{/$lookfor/}@logarray;
if (@match)    {
foreach(@match)
{
print ;
}
}
else
{
print"$lookfor was not found, an email has been sent.nn";
# here is where I send myself the email- same as below.
exit(1);
}
print" Going to next stepnn";

###########################################################################
print"STARTING THE SEARCH FOR DISCONNECTED MESSAGEnn";
###########################################################################
$lookfor=("Agent is now disconnected");

@match=grep{/$lookfor/}@logarray;
if (@match)    {
foreach(@match) {
#        system("echo $lookfor was found. |mail -s "$0 - $sysName"
[Email Removed]");
print"$lookfor was found, an email has been sent.n";
exit(1);
}
}

print"EVERYTHING RAN OKnn";

exit(0);


John W. Krahn
[Email Removed] wrote:
QUOTE

I found this in a template for creating subroutines, this is the base
that is created when you use the template to create the subroutine.

So now the newbie part, why would you place "my  $par1 = shift;" in the
subroutine template, and what does it do??

Basically I am trying to find out if I need to modify the template or
not. Any help would be greatly appreciated.

Oh and btw I looked at the shift function and it applies to the @_
array, which is not being used in this subroutine, and neither is @par1,
so my only guess would be that the template is creating a verifiably
empty variable called $par1.

sub Irfan
{
my  $par1 = shift;

return ;
} # ----------  end of subroutine Irfan  ----------

Inside of a subroutine "shift" with no arguments is the same as "shift @_" and
outside of a subroutine "shift" with no arguments is the same as "shift
@ARGV". You can read all about subroutines in the perlsub document.

perldoc perlsub
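
A minimal sketch of what that template line does (sub and argument made up):

# shift with no argument inside a sub pulls the first element off @_,
# so $par1 ends up holding whatever the caller passed first
sub greet {
    my $name = shift;                  # same as: my ($name) = @_;  for one argument
    return "Hello, $name!";
}
print greet('Irfan'), "\n";            # prints "Hello, Irfan!"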



John
--
use Perl;
program
fulfillment

Philipp Traeder
On Sunday 29 August 2004 HH:58:18, Jenda Krynicky wrote:
QUOTE
From: Philipp Traeder <[Email Removed]

You're right - the problem I'm trying to solve is quite restricted -
and I'm very thankful for this ;-) Basically, I'm trying to write an
application that "recognizes" log file formats, so that the following
lines are identified as several manifestations of the same log
message:

could not delete user 3248234
could not delete user 2348723

or even

failed to connect to primary server
failed to connect to secondary server

What I would like to see is a count of how many "manifestations" of
each log message are being thrown, independently of the actual data
they might contain. Since I do not want to hardcode the log messages
into my application, I would like to generate regexes on the fly as
they are needed.

Well and how are you going to tell the program which messages to take
as the same?
Do you plan to teach the app as it reads the lines? Do you want it to
ask which group is a line that doesn't match any of the regexps so
far and have the regexp modified on the fly to match that line as
well?

Or what do you want to do?

IMHO it might be best to use handmade regexps, just don't have them
built into the application, but read from a config file. That is for
each type of logs you'd have a file with something like this:

delete_user=^could not delete user \d+
connect=^failed to connect to (?:primary|secondary) server
...

read the file, compile the regexps with qr// and have the application
try to match them and have the messages counted in the first group
whose regexp matches.


Do I make sense or am I babbling nonsense?

You're making perfect sense - the problem is not as trivial as I thought
originally, but I think it's not that bad as long as you don't require a
precision of 100%.

In a perl script I wrote some time ago, I'm grouping log messages by comparing
them word by word, using the String::Compare module like this:

compare($message1, $message2, word_by_word => 5);

If I read the module's code correctly, the strings are split up by whitespace
and then compared char by char. Using this approach, I get a high similarity
even if the differing parts of the strings do not have the same length, like
in
failed to connect to primary server
failed to connect to secondary server

What I did now was to extend String::Compare in a way that it records the
differing parts of the strings in a string array for each string (actually, I
did not extend String::Compare, but ported it to Java, because I'm writing
the application in Java, but the idea should be the same) and returns a
"wildcarded" version of the string, i.e. a version that replaces each
character that is not identical in both strings with a wildcard string.

Currently, I'm not using the regexp that is generated in this way for matching
new messages, because I ran in some kind of deadlock: What should I do when I
get a message for which I do not have a matching regexp yet? Since I do have
only one occurrence of this message so far, I cannot detect a pattern, thus I
cannot generate a regexp. Therefore, I've got to compare all messages that
follow in the method described above against the real messages, not against a
wildcarded version.
Anyway - if you choose the wildcard-character wisely, I think you should be
able to generate a regexp that is surely not as good as one written by a
human, but probably good enough (e.g. you could take
(.*?)
as wildcard character for each differing "word").
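
A hypothetical sketch of that wildcarding step, for two messages with the
same number of words:

sub wildcard_pattern {
    my @a = split ' ', shift;
    my @b = split ' ', shift;
    return undef unless @a == @b;      # only handle equal word counts here
    my @parts;
    for my $i ( 0 .. $#a ) {
        push @parts, $a[$i] eq $b[$i] ? quotemeta( $a[$i] ) : '(.*?)';
    }
    return join '\s+', @parts;
}

my $rx = wildcard_pattern(
    'failed to connect to primary server',
    'failed to connect to secondary server',
);
# $rx is now 'failed\s+to\s+connect\s+to\s+(.*?)\s+server'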

At the moment, this should be enough to solve my problem - I'm already using
the word-by-word string comparison successfully, and it looks as if the
ported/extended java version of String::Compare would do what I need.
Nevertheless I could imagine that you could build better regexps by comparing
the data that you extracted from the message (since I need to extract the
data anyway to use them later, this is a very likely option). Let's say I've
got the following log messages taken from a web application:

30/08/2004 23:25:01 processed request for a.html - took 35 ms
30/08/2004 23:25:05 processed request for ab.html - took 42 ms
30/08/2004 23:25:05 processed request for a.html - took 37 ms

My application compares the messages, detects that they are very similar, and
creates the following pattern (assuming that the wildcard char is an asterisk
and that a multi-character difference is replaced by one wildcard char):

30/08/2004 23:25:* processed request for *.html - took * ms

The differing data it extracts for the three lines is this:

01 a 35
05 ab 42
05 a 37

Going over the individual "columns" of data, the application could try to
match some pre-declared data formats, i.e. it could check if all values match
certain patterns like "\d+", "[a-zA-Z]+" etc. If it finds a matching format, it
could adapt the regexp so that it matches more fine-grained.

You could object (and if I understood your mail correctly, you already did)
that the application created a wrong pattern by taking the date (including
the minutes, but not the seconds) as fixed - a log message that arrives a
minute later would not fit the regexp anymore. This is a problem if I'm
trying to use the regexp to match the messages, but not if I'm comparing the
messages as strings again (as described above).

Writing this, I think you're right - my problem is probably not solvable by
generating regexps on the fly, but only (hopefully) by comparing strings on a
more brute-force level. It might be an option to try to use regexps in order
to speed up the process, but if you do not find a matching regexp, you
probably need to go back to comparing strings again...

I've not finished the application yet, so I can't say if all of this is going
to work, but I'm quite optimistic at the moment. With a bit of luck, I can
show you a working version in a few weeks (FWIW: The application I'm talking
about will be a log4j server application - similar to chainsaw, but built for
the application operators as opposed to the developers).

Thank you for your insightful questions and suggestions - I appreciate
very much the opportunity to discuss those problems before running against too
many walls. :-)

Philipp

Sudhakar Gajjala
It works for me . Thanks

Sudhakar Gajjala





Chris Devers <[Email Removed]> on 08/30/2004 10:47:55 PM

Please respond to [Email Removed]

To: Sudhakar Gajjala/C/UTStarcom@UTStarcom
cc: [Email Removed]
Subject: Re: How do i run shell command


On Mon, 30 Aug 2004 [Email Removed] wrote:

QUOTE
I was trying to run System command from my perl Script . As i have pipe
( |
Anybody help me how to run shell command in Perl

Here is the command :  system "cat $filename | wc -l";

You realize, of course, that this can be done entirely in Perl ?

Quoting from the excellent _Perl Cookbook_:

[...] you can emulate wc by opening up and reading the file yourself:

open(FILE, "< $file") or die "can't open $file: $!";
$count++ while <FILE>;
# $count now holds the number of lines read

Another way of writing this is:

open(FILE, "< $file") or die "can't open $file: $!";
for ($count=0; <FILE>; $count++) { }

If you're not reading from any other files, you don't need the $count
variable in this case. The special variable $. holds the number of
lines read since a filehandle was last explicitly closed:

1 while <FILE>;
$count = $.;

This reads all the records in the file and discards them.

But if you really do need to do this via a system command -- you don't,
but I'll play along -- then the command as you've given it is what is
known as a Useless Use Of Cat.

This command --

cat file | wc -l

-- is equivalent to this one --

wc -l file

-- but the latter invokes less overhead, and so should be a bit faster.

Unless you really are conCATenating a chain of files together, most
commands of the form "cat foo | cmd" can be rewritten as "cmd foo" or,
maybe, "cmd < foo".



--
Chris Devers [Email Removed]
http://devers.homeip.net:8080/blog/

np: 'It's Not Easy Being Green (lo-fi midi version)'
by Kermit
from 'The Muppet Movie Soundtrack'

Radhika Sambamurti
Hi,
I was trying to reproduce the code below.
I was wondering what the 1 is doing before the while. Is it the exit
status of the while, that is until eof is reached and exit code = 1 ?

thanks,
radhika

QUOTE
If you're not reading from any other files, you don't need the
$count
variable in this case. The special variable $. holds the number of
lines read since a filehandle was last explicitly closed:

1 while <FILE>;
$count = $.;

This reads all the records in the file and discards them.

But if you really do need to do this via a system command -- you don't,
but I'll play along -- then the command as you've given it is what is
known as a Useless Use Of Cat.

This command --

cat file | wc -l

-- is equivalent to this one --

wc -l file

-- but the latter invokes less overhead, and so should be a bit faster.

Unless you really are conCATenating a chain of files together, most
commands of the form "cat foo | cmd" can be rewritten as "cmd foo" or,
maybe, "cmd < foo".



--
Chris Devers      [Email Removed]
http://devers.homeip.net:8080/blog/

np: 'It's Not Easy Being Green (lo-fi midi version)'
by Kermit
from 'The Muppet Movie Soundtrack'





Christopher L Hood
-----Original Message-----
From: [Email Removed] [mailto:[Email Removed]]
Sent: Tuesday, August 31, 2004 12:10 AM
To: [Email Removed]
Subject: Re: How do i run shell command


Chris,

You are exactly right, that is a useless use of cat, old habits die
hard. And of course you are correct in that it can be done entirely in
perl, the availability of the shell cmd wc makes us lazy, and we don't
want to code what we can just call from the system.

Chris Hood

Wiggins d Anconia
QUOTE


Chris,

You are exactly right, that is a useless use of cat, old habits die
hard. And of course you are correct in that it can be done entirely in
perl, the availability of the shell cmd wc makes us lazy, and we don't
want to code what we can just call from the system.

Chris Hood


Except laziness is a virtue, this is just insufficient code, not the
"good" laziness. If you coded it up the way it *should* be done for
portability, security, proper error handling, etc. it would in the end
be longer.

use Tie::File;

my @array;
tie @array, 'Tie::File', $filename or die "Can't tie file: $!";
my $length = @array;

Tough to beat....

http://danconia.org

Jenda Krynicky
From: Eduardo Vázquez Rodríguez <[Email Removed]>
QUOTE
Hello everybody out there using Perl - I'm writing a perl script whose
objective is to parse text.

I am looking for a way of creating a "perl binary", my intention is
that no one can read the scripts in a human readable way.

You can't.
You can make it unreadable to most, and you can make it more or less hard
for others, but you can't do anything to prevent a seasoned Perl
hacker from getting to see your code.

You want to go to
http://www.perlmonks.org/index.pl?node=Super%20Search
and search for "hide script source". This has been discussed quite a
few times on PerlMonks.

Jenda
===== [Email Removed] === http://Jenda.Krynicky.cz =====
When it comes to wine, women and song, wizards are allowed
to get drunk and croon as much as they like.
-- Terry Pratchett in Sourcery

Charles K. Clarkson
From: Radhika Sambamurti <mailto:[Email Removed]> wrote:

: thanks,
: radhika
:
: : If you're not reading from any other files, you don't
: : need the $count variable in this case. The special
: : variable $. holds the number of lines read since a
: : filehandle was last explicitly closed:
: :
: : 1 while <FILE>;
: : $count = $.;
: :
: : This reads all the records in the file and discards them.
: :
: Hi,
: was trying to reproduce the code [above].
: I was wondering what the 1 is doing before the while. Is
: it the exit status of the while, that is until eof is
: reached and exit code = 1 ?


'while' can be used as a statement modifier. When used
that way, it places each successive value in the $_ variable
and the line number of the file in the variable $. (And a
number of other things.)

If the statement doesn't do anything with $_ and doesn't
produce any other effect, it becomes irrelevant. 1 and 0
won't raise errors under strict and warnings.

0;
1;
2; # <--- raises a constant in void context warning.
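
Put back in context, a minimal sketch (the file name is made up): the 1 is
just a do-nothing expression, and the loop's only effect is reading the file
so that $. is left holding the number of the last line read.

open my $fh, '<', 'logfile' or die "can't open logfile: $!";
1 while <$fh>;               # read and discard every line
print "lines: $.\n";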


HTH,

Charles K. Clarkson
--
Mobile Homes Specialist


Chris Devers
On Tue, 31 Aug 2004, Pothula, Giridhar wrote:

QUOTE
Do I have use some modules for this to be executed successfully? I am
getting a blank page when I run this asp file!

I don't know the first thing about ASP programming, sorry.

You can try the [Email Removed] list, but I don't know how much
expertise they would have with ASP either...


--
Chris Devers [Email Removed]
http://devers.homeip.net:8080/blog/

np: 'Movin' Right Along (lo-fi midi version)'
by The Muppets
from 'The Muppet Movie Soundtrack'

Bob Showalter
Pothula, Giridhar wrote:
QUOTE
Hi All,

I am trying to get a code snippet for the client side Perl script in
an ASP page which accesses XML file residing on the server.

Hmm, not sure what you mean by "client side". Both ASP and Perl are
server-side technologies.

Anyway, you might want to start at http://perl-xml.sourceforge.net/faq/

Charles K. Clarkson
From: Bob Showalter <mailto:[Email Removed]> wrote:

: Pothula, Giridhar wrote:
: : Hi All,
: :
: : I am trying to get a code snippet for the client side Perl
: : script in an ASP page which accesses XML file residing on the
: : server.
:
: Hmm, not sure what you mean by "client side". Both ASP and Perl
: are server-side technologies.
:
: Anyway, you might want to start at
: http://perl-xml.sourceforge.net/faq/

Perlscript can also be run as a client side script like
Javascript and VBscript. It just seems like an unlikely
scenario.

HTH,

Charles K. Clarkson
--
Mobile Homes Specialist


Bob Showalter
Pothula, Giridhar wrote:

Hi. Top-post please. http://home.in.tum.de/~jain/software/outlook-quotefix/

QUOTE
Sorry...That was a typo. I would like to use PERL script to read the
XML file (Text of the nodes). This is basically to customize the UI
skins. All the skin values like color, images, font etc will be
stored in an XML file.

I would like to read from the XML file to generate the HTML code
dynamically.

OK, well the faq I pointed you to will give you some ideas of the overall
topic of parsing XML with Perl. Lots of ways to approach it, so get an
overview before diving in.

QUOTE

-----Original Message-----
From: Bob Showalter [mailto:[Email Removed]]
Sent: Tuesday, August 31, 2004 11:16 AM
To: Pothula, Giridhar; [Email Removed]
Subject: RE: Perl Script for accessing XML file

Pothula, Giridhar wrote:
Hi All,

I am trying to get a code snippet for the client side Perl script in
an ASP page which accesses XML file residing on the server.

Hmm, not sure what you mean by "client side". Both ASP and Perl are
server-side technologies.

Anyway, you might want to start at
http://perl-xml.sourceforge.net/faq/


Chris Devers
On Tue, 31 Aug 2004, Pothula, Giridhar wrote:

QUOTE
I have to read node values present in the XML file(residing on server)
from the ASP page using some scripting technology. As I am using Apache
web server, I have to use Perl scripting for this req. So, I am trying
to write server side script in the ASP page.

If you're using Apache, there's no reason to use ASP if you don't have
to; this should work with straight Perl/CGI or mod_perl instead.

Would doing this as a regular CGI script be an option for you? You'll
probably have much better luck finding help if you can do that...


(Also, it's "Perl" for the language, "perl" for the program that runs
scripts written in the language, and never ever "PERL" for anything.)



--
Chris Devers [Email Removed]
http://devers.homeip.net:8080/blog/

np: 'It's Not Easy Being Green (lo-fi midi version)'
by Kermit
from 'The Muppet Movie Soundtrack'

Jenda Krynicky
From: "Pothula, Giridhar" <[Email Removed]>
QUOTE
Problem: I would like to use PERL script to read the XML file (Text of
the nodes). This is basically to customize the UI skins. All the skin
values like color, images, font etc will be stored in an XML file.

I would like to read from the XML file to generate the HTML code
dynamically.

It would be much easier using XML::Simple.

...
use XML::Simple;

my $data = XMLin('CustomSkins.xml');
print $data->{to};
...

Jenda
(Code is untested but should be about right.)

===== [Email Removed] === http://Jenda.Krynicky.cz =====
When it comes to wine, women and song, wizards are allowed
to get drunk and croon as much as they like.
-- Terry Pratchett in Sourcery

Jenda Krynicky
From: "Pothula, Giridhar" <[Email Removed]>
To: "Jenda Krynicky" <[Email Removed]>, <[Email Removed]>

Please don't do this. I get the mail through the mailing list I don't
need to get it directly as well. It gets sorted into the same folder
anyway so you are not going to get a reply sooner.

QUOTE
I tried that but it didn't work. It is not recognizing keywords like
"use" and "my".

Please send us
1. the complete ASP you have
2. the exact errors you get
3. your operating system version, your web server version, your perl
version (run perl -v from command prompt)

Jenda
===== [Email Removed] === http://Jenda.Krynicky.cz =====
When it comes to wine, women and song, wizards are allowed
to get drunk and croon as much as they like.
-- Terry Pratchett in Sourcery

JupiterHost.Net
QUOTE
Seems the module docs are incorrect. This seems to work for me:

Sorry for the delay :)

QUOTE
pod2html("d:\perl\site\lib\$pmfile",
"--title=$pmfile pod2html",
'--backlink=Back to Top',
'--css=http://search.cpan.org/s/style.css',
"--cachedir=c:\temp",
);

That is, the first parameter seems to be the file to process,
not the 'pod2html' constant as the docs suggest.

As you can see I had to specify the --cachedir as well since the
script tries to create some files and it did not have write perms to
current directory.

Yep that did it! Thanks :)

QUOTE
This way it works, what I do not understand is that if I run the
script from the command line it finishes in under a second, if I
start it via web it takes about 20s. GOK.

That is weird, mine finishes the same either way, just not the right
goodies :)

Have a good one Jenda!

Lee.M - JupiterHost.Net

./Rob &
"seane" <[Email Removed]> wrote in message
news:[Email Removed]...
QUOTE
I have this code and receive the message "Can't modify subroutine
entry in scalar assignment at script.pl line 165, near "$_;" " when
running on perl, version 5.005_03 but not when running on perl version
v5.6.1. I know the easiest answer would be to upgrade the older version
of perl but I have NO control over that.

Is there a change I can make for this to work on 5.005_03 as well?

I appreciate your help:

$conmsg=("Agent is now connected");
$disconmsg=("Agent is now disconnected");

foreach (@logarray)
{
our $msg = $_;            #this is line 165.
if ($msg=~ "$conmsg")
{

Here is what I would do:

foreach(@logarray) {
if(m/$conmsg/) { # or # if($_ =~ m/$conmsg/) {

I'm not sure why you're using 'our' versus 'my'? (Note that 'our' only
exists from perl 5.6.0 on, so 5.005_03 doesn't know the keyword, which is
why that line blows up there.)

foreach(@array) { my $msg = $_;

Wiggins d Anconia
QUOTE
Hi guys (and gals!),

I want to compare a constant, known (expected values) array with the
results I'm collecting in another array.

Something like this, but I don't think this works the way I want it to:

my @rray1 = qw( One Two Three );
chomp( my @rray2 = <STDIN> );

print "The 2 arrays are the samen" if( @rray1 eq @rray2 );

So that if I enter:
One<enter>
Two<enter>
Three<enter>
<ctrl-D>
on the terminal when I run the code, It will print "The 2 arrays are
the same" on the next line.

However, my tests seem to indicate I don't really know what's
happening when I:
@rray1 eq @rray2

Can someone help me?

--Errin

The above does not work because 'eq' forces the arrays into scalar
context, so you are testing whether the lengths of the arrays are equal
as strings (the numeric comparison would be C<==>). But how to do this can
be found in the FAQ,

perldoc -q 'test whether two arrays or hashes are equal'

perldoc perlfaq4, has some other juicy bits about analysing arrays and
other data stores.
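
A minimal sketch of the element-by-element check that FAQ entry describes
(assuming plain, defined strings in both arrays):

sub arrays_equal {
    my ( $x, $y ) = @_;                 # two array references
    return 0 unless @$x == @$y;         # different lengths: not equal
    for my $i ( 0 .. $#$x ) {
        return 0 unless $x->[$i] eq $y->[$i];
    }
    return 1;
}

my @rray1 = qw( One Two Three );
chomp( my @rray2 = <STDIN> );
print "The 2 arrays are the same\n" if arrays_equal( \@rray1, \@rray2 );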

http://danconia.org

scott
Have you checked out the latest version of Chainsaw?
http://logging.apache.org/log4j/docs/chainsaw.html

The latest version is a lot more flexible and could easily run in a
configuration that would be useful to operators (collect ERROR and
WARN events from multiple sources into a single tab in the ui, for
example).

Chainsaw can load arbitrary log files into the UI - the user specifies
the pattern in the file using a set of pre-defined keywords. The file
can be tailed as well if you like.

Once the events are loaded in Chainsaw, you can use the simple
expression syntax to find/colorize and filter events (this expression
syntax includes support for regular expressions).

If you have a Java VM, there's a link on the Chainsaw page above that
will start Chainsaw without requiring a manual install (WebStart-based
app).

There is a tutorial available from the Welcome tab. If you have
questions, feel free to email the log4j mailing list:
http://logging.apache.org/site/mailing-lists.html

By the way, log4perl can also send events to Chainsaw, either using
the log file receiver I mentioned above, or by sending events over a
socket. See http://log4perl.sourceforge.net/releases/L.../FAQ.html#ec4ff

Scott


[Email Removed] (Philipp Traeder) wrote in message news:<[Email Removed]>...
QUOTE
On Sunday 29 August 2004 HH:58:18, Jenda Krynicky wrote:
From: Philipp Traeder <[Email Removed]

You're right - the problem I'm trying to solve is quite restricted -
and I'm very thankful for this ;-) Basically, I'm trying to write an
application that "recognizes" log file formats, so that the following
lines are identified as several manifestations of the same log
message:

could not delete user 3248234
could not delete user 2348723

or even

failed to connect to primary server
failed to connect to secondary server

What I would like to see is a count of how many "manifestations" of
each log message are being thrown, independently of the actual data
they might contain. Since I do not want to hardcode the log messages
into my application, I would like to generate regexes on the fly as
they are needed.

Well and how are you going to tell the program which messages to take
as the same?
Do you plan to teach the app as it reads the lines? Do you want it to
ask which group is a line that doesn't match any of the regexps so
far and have the regexp modified on the fly to match that line as
well?

Or what do you want to do?

IMHO it might be best to use handmade regexps, just don't have them
built into the application, but read from a config file. That is for
each type of logs you'd have a file with something like this:

delete_user=^could not delete user d+
connect=^failed to connect to (?:primary|secondary) server
...

read the file, compile the regexps with qr// and have the application
try to match them and have the messages counted in the first group
whose regexp matches.


Do I make sense or am I babbling nonsense?

You're making perfect sense - the problem is not as trivial as I thought
originally, but I think it's not that bad as long as you don't require a
precision of 100%.

In a perl script I wrote some time ago, I'm grouping log messages by comparing
them word by word, using the String::Compare module like this:

compare($message1, $message2, word_by_word => 5);

If I read the module's code correctly, the strings are split up by whitespace
and then compared char by char. Using this approach, I get a high similarity
even if the differing parts of the strings do not have the same length, like
in
failed to connect to primary server
failed to connect to secondary server

What I did now was to extend String::Compare in a way that it records the
differing parts of the strings in a string array for each string (actually, I
did not extend String::Compare, but ported it to Java, because I'm writing
the application in Java, but the idea should be the same) and returns a
"wildcarded" version of the string, i.e. a version that replaces each
character that is not identical in both strings with a wildcard string.

Currently, I'm not using the regexp that is generated in this way for matching
new messages, because I ran in some kind of deadlock: What should I do when I
get a message for which I do not have a matching regexp yet? Since I do have
only one occurence of this message so far, I can not detect a pattern, thus I
can not generate a regexp. Therefore, I've got to compare all messages that
follow in the method described above against the real messages, not against a
wildcarded version.
Anyway - if you choose the wildcard-character wisely, I think you should be
able to generate a regexp that is surely not as good as one written by a
human, but probably good enough (e.g. you could take
(.*?)
as wildcard character for each differing "word").

At the moment, this should be enough to solve my problem - I'm already using
the word-by-word string comparison successfully, and it looks as if the
ported/extended java version of String::Compare would do what I need.
Nevertheless I could imagine that you could build better regexps by comparing
the data that you extracted from the message (since I need to extract the
data anyway to use them later, this is a very likely option). Let's say I've
got the following log messages taken from a web application:

30/08/2004 23:25:01 processed request for a.html - took 35 ms
30/08/2004 23:25:05 processed request for ab.html - took 42 ms
30/08/2004 23:25:05 processed request for a.html - took 37 ms

My application compares the messages, detects that they are very similar, and
creates the following pattern (assuming that the wildcard char is an asterisk
and that a multi-character difference is replaced by one wildcard char):

30/08/2004 23:25:* processed request for *.html - took * ms

The differing data it extracts for the three lines is this:

01  a  35
05  ab  42
05  a  37

Going over the individual "columns" of data, the application could try to
match some pre-declared data formats, i.e. it could check whether all values
match certain patterns like "\d+", "[a-zA-Z]+" etc. If it finds a matching
format, it could tighten the regexp so that it matches in a more fine-grained
way.
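
A hedged sketch of that column check; the candidate formats and the way
the wildcard gets tightened are only my guesses at how it might be done:

use strict;
use warnings;

# One array per wildcard position, holding the values extracted from the
# three example messages above.
my @columns = ( ['01', '05', '05'], ['a', 'ab', 'a'], ['35', '42', '37'] );

# Candidate formats, tried in order; the first one that every value in a
# column matches wins.
my @formats = ( ['\d+' => qr/^\d+$/], ['[a-zA-Z]+' => qr/^[a-zA-Z]+$/] );

my @refined;
for my $col (@columns) {
    my $piece = '(.*?)';                 # fall back to the plain wildcard
    for my $format (@formats) {
        my ($subpattern, $rx) = @$format;
        if (@$col == grep { /$rx/ } @$col) {
            $piece = "($subpattern)";
            last;
        }
    }
    push @refined, $piece;
}
print join(', ', @refined), "\n";        # (\d+), ([a-zA-Z]+), (\d+)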

You could object (and if I understood your mail correctly, you already did)
that the application created a wrong pattern by taking the date (including
the minutes, but not the seconds) as fixed - a log message that arrives a
minute later would not fit the regexp anymore. This is a problem if I'm
trying to use the regexp to match the messages, but not if I'm comparing the
messages as strings again (as described above).

Writing this, I think you're right - my problem is probably not solvable by
generating regexps on the fly, but only (hopefully) by comparing strings on a
more brute-force level. It might be an option to try to use regexps in order
to speed up the process, but if you do not find a matching regexp, you
probably need to go back to comparing strings again...

I've not finished the application yet, so I can't say if all of this is going
to work, but I'm quite optimistic at the moment. With a bit of luck, I can
show you a working version in a few weeks (FWIW: The application I'm talking
about will be a log4j server application - similar to chainsaw, but built for
the application operators as opposed to the developers).

Thank you for your insightful questions and suggestions - I very much
appreciate the opportunity to discuss these problems before running into too
many walls. :-)

Philipp


Brian Volk
Thank you Wiggins! I have changed everything that you suggested and I think
I am much closer. However, I have run into an error w/ the Read line.

foreach my $file (@files) {
$image->Read (file=> $file)

Bad filehandle: brian.jpg at C:/Program Files/PerlEdit/scripts/test_3.pl
line 17

as you can see the script is seeing the file name in the image directory. I
re-read chapter 11 Learning Perl on Bad Filehandles but I'm still having
trouble. Any suggestion would be greatly appreciated.

Thanks!

Brian


-----Original Message-----
From: Wiggins d Anconia [mailto:[Email Removed]]
Sent: Thursday, September 09, 2004 12:28 PM
To: Brian Volk; Beginners (E-mail)
Subject: Re: perl crashing at $image->Read (file=> *ARGV);


QUOTE

Hi All,

My perl script is crashing perl at this line:

$image->Read (file=> *ARGV);

I know that it is this line because I have commented out everything else
around it.  When I just have the Read statement, perl will crash.  Here is
the script; can someone please suggest what I am doing wrong.

Thanks!

----------------------------------------------------------------------------
---------------------
#!/user/local/bin/perl -w

use strict;
use Image::Magick;

my $images = "C:/images";
opendir (IMAGES, $images) or die "can not open $images: $!";

# load @ARGV for (<>)

@ARGV = map { "$images/$_" } grep { !/^\./ } readdir IMAGES;


my @files = map { "$images/$_" } grep { !/^\./ } readdir IMAGES;

Not sure why you are using @ARGV just for its special qualities (aka the
<> operator); why not name our variables? We are allowed to.

QUOTE
my $image = Image::Magick->new(magick=>'JPEG');

# Read images, Scale down and Write to new directory

while (<>) {

No need to use a while here, since you already have a complete array,

foreach my $file (@files) {

QUOTE
$image->Read (file=> *ARGV)

By naming our variables we now see that we are dealing with a filename,
rather than a typeglob reference.

$image->Read(file => $file)

The example you are using assumes that *ARGV contains an opened
filehandle to the file itself, *but* within the loop you are executing
on each line. See perldoc perlop for more. I would skip using the
special nature of the variables until you understand them. Try using
specifically named variables until the program works, then reduce it if
you must.

QUOTE
and $image->Scale (width=>'50', height=>'50')
and $image->Write ("C:/images")

C<Write> expects a filename argument, not a directory, or a handle.

QUOTE
and close (ARGV);

Not sure why these statements are strung together with C<and>; they can
be separate, and you haven't really benefited by making them a single
statement. And you wouldn't normally close ARGV.

QUOTE
}

closedir IMAGES;


You can close your dir earlier in the process, since you are done
reading from it.
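
Putting those suggestions together, here is a hedged sketch of how the
revised script might look; the destination directory, the error handling,
and the exact Scale/Write arguments are my assumptions rather than anything
tested on Brian's setup:

#!/usr/bin/perl
use strict;
use warnings;
use Image::Magick;

my $src  = 'C:/images';
my $dest = 'C:/images_small';    # assumed output directory

opendir my $dh, $src or die "can not open $src: $!";
my @files = grep { !/^\./ && -f "$src/$_" } readdir $dh;
closedir $dh;                    # done reading, so close the dir right away

for my $file (@files) {
    my $image = Image::Magick->new;
    my $err = $image->Read("$src/$file");
    warn "Read $file: $err" and next if $err;
    $image->Scale(width => 50, height => 50);
    $err = $image->Write("$dest/$file");   # Write wants a filename, not a dir
    warn "Write $file: $err" if $err;
}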

QUOTE
Brian Volk
HP Products

<mailto:[Email Removed]> [Email Removed]

Daniel Staal
--As of Thursday, September 9, 2004 9:24 AM -0500, Errin Larsen is alleged
to have said:

QUOTE
Excellent!  Thank you.  I knew it was something easy, just hadn't
kick-started my brain yet this morning.  But I've got another one.
What if the user input, say, '007' on the command line?  How can I
strip that off?  I think I can check for it with something like this:

/^0?[1-9]/

But if I find it, how do I strip it off?

--As for the rest, it is mine.

Don't bother. ;)

It's a number, after all. Perl will remove leading zeros for you as soon as
you use it as a number, as long as it contains only digits. Strip out anything
that isn't valid, and add the zeros back when you need them. (I suggest
sprintf, personally.)
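
For example (the field width of 3 is just an assumption about what the
'007' input implies):

use strict;
use warnings;

my $input  = '007';
my $num    = $input + 0;             # numeric context drops the leading zeros: 7
my $padded = sprintf '%03d', $num;   # pad back out to three digits: "007"
print "$num $padded\n";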

Daniel T. Staal

---------------------------------------------------------------
This email copyright the author. Unless otherwise noted, you
are expressly allowed to retransmit, quote, or otherwise use
the contents for non-commercial purposes. This copyright will
expire 5 years after the author's death, or in 30 years,
whichever is longer, unless such a period is in excess of
local copyright law.
---------------------------------------------------------------

Wiggins d'Anconia
Please bottom post....

Brian Volk wrote:
QUOTE
Thank you Wiggins!  I have changed everything that you suggested and I think
I am much closer.  However, I have run into an error w/ the Read line.

foreach my $file (@files) {
$image->Read (file=> $file)

Bad filehandle: brian.jpg at C:/Program Files/PerlEdit/scripts/test_3.pl
line 17

as you can see the script is seeing the file name in the image directory.  I
re-read chapter 11 Learning Perl on Bad Filehandles but I'm still having
trouble.  Any suggestion would be greatly appreciated.

Thanks!

Brian


Either....

It appears that the docs for I::M are incorrect and that C<Read> and
C<Write> must take a filehandle. Difficult to tell since all the code
is XS/C and I didn't feel like popping the hood on it. You could try
switching back to using a handle but I would be more specific about it,
so for instance, within the foreach you would have:

open my $READHANDLE, $filename or die "Can't open file for reading: $!";
$image->Read('file' => $READHANDLE);

etc.

Or there is an issue with the installation, paths, etc. on Windows. You
should retrieve the actual error message from I::M and see what it says,
similar to,

my $result = $image->Read('file' => $filename);
print $result;

Have you checked out the info at:

http://www.dylanbeattie.net/magick/

It appears to be good info for Win32 specific stuff related to I::M.
Personally I can't even test it here so it is difficult for me to point
you in the right direction. Maybe one of the other M$ users will chime
in ...

http://danconia.org




Tim,
You should close your file handles in your parsing code, before you unlink
hth,
Mark G
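
A minimal sketch of what Mark means; the file name and handle are made up
for illustration, but the point is that the handle is closed before
File::Temp's cleanup ever runs:

use strict;
use warnings;
use File::Temp qw(tempdir);

my $tempdir = tempdir( CLEANUP => 1 );

open my $out, '>', "$tempdir/report.txt" or die "Can't write report: $!";
print {$out} "parsed data here\n";
close $out or die "Can't close report: $!";

# At program exit File::Temp can now unlink the file and remove the
# directory; Windows refuses to delete a file that is still held open.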

----- Original Message -----
From: Tim Donahue <[Email Removed]>
Date: Friday, September 10, 2004 2:58 pm
Subject: Removing a tempdir's on Windows

QUOTE
Hello, I am writing a custom log parser for our Squid proxy, and I have
run into some problems with trying to use a temporary directory. The
script parses all the logs, dumping those that are of interest to us for
the various parts of the report to smaller, easier-to-handle files. I
am using the following statement to create a temporary directory, which
works out great and removes most of the files; unfortunately I cannot
seem to make it remove all of them.

our $templogs_base = 'C:\squid\var\log\SquidLog';
our $templogs = tempdir( DIR => $templogs_base, CLEANUP => 1 );


When I run the script, I get the following errors:

Can't unlink file C:\squid\var\log\SquidLog\5FIwr0r4nz/tmp-stb.com.txt:
Permission denied at C:/SFU/Perl/lib/File/Temp.pm line 845
Can't remove directory C:\squid\var\log\SquidLog\5FIwr0r4nz:
Directory not empty at C:/SFU/Perl/lib/File/Temp.pm line 845

I have thought about manually running unlink on all the files
contained
within that directory, however that seems like an ugly kludge to force
the removal of the temporary directory.

Can anyone shed some light on a clean way to do this?

Tim Donahue



--
To unsubscribe, e-mail: [Email Removed]
For additional commands, e-mail: [Email Removed]
<http://learn.perl.org/> <http://learn.perl.org/first-response




Tim Donahue
Mark, you are correct it would help if I had closed the file handles.
That is what I get for trusting a friend's code, and not reviewing all
it when I add it to my project.

Thanks for your help.

Tim Donahue


Johann Spies
On Thu, Sep 09, 2004 at 02:57:32PM -0400, Chris Devers wrote:
QUOTE
On Thu, 9 Sep 2004, Johann Spies wrote:

We have a situation where we need to open a dbm-file but cannot do so
using perl version 5.8.4-2 on Debian Sarge.  The following script
fails, but the same script and dbm-file work on Woody with perl 5.6:

Apparently, DBM files can get messed up if you create them with one
version of the DBM but then work with them under another. Unlike almost
all other modules, this is something that has to be handled when Perl
itself is compiled, and the DBM engine that that instance of Perl is
linked to has to stay the same forever. Or something. It's complicated.

See <http://www.perldoc.com/perl5.8.4/pod/perltrap.html#DBM-Traps> (scan
down a bit for the string 'dbm' -- the anchor doesn't actually go to the
right section of the document, but it's almost at the end).

My understanding is that the best workaround for this these days is to
use AnyDBM_File, or a "real" database engine & driver. See:

<http://www.perldoc.com/perl5.8.4/lib/AnyDBM_File.html>
<http://search.cpan.org/~nwclark/perl-5.8.5/lib/AnyDBM_File.pm>

For things that used to be done in DBM variants, SQLite has been getting
very popular recently. It's a small, fast way to get SQL query access
to structured disc files (which arguably makes it a better MySQL than
MySQL, considering the features that database has been adding lately).
Moreover, it shouldn't have the problems with static linking that have
been annoying DBM users for years now.

Read more about it:

<http://search.cpan.org/~msergeant/DBD-SQLite-1.04/lib/DBD/SQLite.pm>
<http://www.perl.com/pub/a/2003/09/03/perlcookbook.html>

If you're not already locked in with DBM, SQLite may be much easier.
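
A hedged sketch of the AnyDBM_File route; the file name and the stored
value are made up, and it only helps if the file was created by the same
underlying DBM library this perl was built with, which is exactly the trap
described above:

use strict;
use warnings;
use Fcntl;            # for the O_* flag constants
use AnyDBM_File;

# Tie a hash to whatever DBM implementation this perl prefers.
tie my %db, 'AnyDBM_File', 'mydata', O_RDWR|O_CREAT, 0644
    or die "Can't tie DBM file: $!";

$db{greeting} = 'hello';
print "$db{greeting}\n";

untie %db;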

Thanks. We are using WebCT here which has the DBM-technology
embedded. So SQLite is not an option in this case. We will try
AnyDBM.

Regards
Johann
--
Johann Spies Telefoon:
Informasietegnologie, Universiteit van Stellenbosch

"For my thoughts are not your thoughts, neither are
your ways my ways, saith the LORD. For as the heavens
are higher than the earth, so are my ways higher than
your ways, and my thoughts than your thoughts."
Isaiah 55:8,9

Bee
QUOTE
Have you read the pack and unpack tutorial?

perldoc perlpacktut

Thanks for this, I missed this one.

QUOTE


Q1. Can I expect that pack can do this for me ?
- compress a text file into a smaller size

You could implement a compression algorithm with pack/unpack, if you really
wanted to.

- Besides the templates 'aAuU', is there any other template I can use to prepare fixed-length data?
- If yes, how do I know what the block size is, in case I write a binary file and want to use seek?

You are going to have to explain that in more detail.


For example, I am doing something like a log of users vs. times of sign-in. The user name will be set as 30 characters long, and the sign-in count is within the range of a normal integer. I want to make a simple DB for this rather than a list of single files, one for each user, and I want it to support random read/write, so it should be a binary file. I can't do random read/write with a text file anyway, right? So I don't want to use 'A' or 'a' as pack templates.

However, as the file is binary, I think there could be a size benefit from compressing the data length. Am I on the right track to start this? Any starting hints, or perhaps a very simple example?


QUOTE
my  = qw/a A Z b B h H c C s S i I l L n N v V j J f d F  p P u U w x X/;

for ()
{ eval "
print " ->"; @back = unpack "333", (pack "333", @arr); print "<$_>" for @back; print "n"; " };

You *DO* *NOT* have to use eval() to do that!  The format strings are
interpolated just like any other string.

hehe... sorry for bugging =)
I made it this way as a copy-and-paste convenience for my template-vs-result experiment.

Jenda Krynicky
From: "Bee" <[Email Removed]>
QUOTE
- besize template 'aAuU', anything else tempplate I can use to
prepare fix length data ? - if yes, but how do I assuming the
block size is? In case,  if I write a binary file and I wanna
use seek.

You are going to have to explain that in more detail.

In case, I am doing something like a log with User v TimesOfSignIn.
So, user name will set as 30 char long, and the Signin times is about
in scope of a normal integer. I wanna make this a simple DB for this
but not a list of single files for each user. So I wanna make this
doable for ramdom read write, so, that should be in a binary file.

You want to have a look at DBM files. Read
perldoc DB_File
or
perldoc SDBM_File

Or you may try to install DBD::SQLite. That would give you the full
power of SQL without having to install anything but the module.
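
For the user-vs-sign-in-count case specifically, a hedged sketch with
DB_File; the file name and key are invented for illustration:

use strict;
use warnings;
use Fcntl;
use DB_File;

# Keys are user names, values are sign-in counts, stored on disk.
tie my %signins, 'DB_File', 'signins.db', O_RDWR|O_CREAT, 0644, $DB_HASH
    or die "Can't open signins.db: $!";

$signins{jsmith}++;                      # record one more sign-in
print "jsmith: $signins{jsmith}\n";

untie %signins;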

QUOTE
However, as the file is in a binary, I think there could be a size
benfit for me to compress the data length.

"Premature optimization is the root of all evil."

How many users are we talking about? Millions? I would not sweat over
a few KB you might save.


Jenda
===== [Email Removed] === http://Jenda.Krynicky.cz =====
When it comes to wine, women and song, wizards are allowed
to get drunk and croon as much as they like.
-- Terry Pratchett in Sourcery

Brian Volk
Wiggins d'Anconia wrote:

QUOTE
It appears that the docs for I::M are incorrect and that C<Read> and
C<Write> must take a filehandle.  Difficult to tell since all the code
is XS/C and I didn't feel like popping the hood on it. You could try
switching back to using a handle but I would be more specific about it,
so for instance, within the foreach you would have:

open my $READHANDLE, $filename or die "Can't open file for reading: $!";
$image->Read('file' => $READHANDLE);

etc.

Or there is an issue with the installation, paths, etc. on Windows. You
should retrieve the actual error message from I::M and see what it says,
similar to,

my $result = $image->Read('file' => $filename);
print $result;

Well, I got it reading from a directory; Mike at the Image::Magick mailing
list helped me out... "The read must contain the path as well as the filename.

$image->Read (file=> "$path\\$file");"

but I am still having some problems with Scale and Write. The error is:

"my" variable $img masks earlier declaration in same scope at C:/Program
Files/
erlEdit/scripts/image_test.pl line 25.
JPEG 70048.jpg
Can't call method "Scale" on an undefined value at C:/Program
Files/PerlEdit/sc
ipts/image_test.pl line 24.

As you can see, the program starts to read from the image directory, but when
it hits "Scale", I get the undefined value error. I'm not sure what that
means.

-------------------------------
#!/user/bin/perl -w

use strict;
use Image::Magick;

my $image_source_folder = "C:/images";
my $image_dest_folder = "C:/images_small";

opendir(IMAGES,$image_source_folder);
my @images_to_process_list=grep {!(/^\./) && -f "$image_source_folder/$_"}
readdir(IMAGES);
closedir (IMAGES);

foreach my $image_source_file(@images_to_process_list) {

my $img = new Image::Magick;

my $status=$img->Read("$image_source_folder\\$image_source_file");

if ($status eq "") {

my $fmt = $img->Get('format');
print "JPEG $image_source_filen";
my $img->Scale(width=>'30', height=>'30');
my
$img->Write("jpg:$image_dest_folder\$image_source_file");
undef $img;
}
}

Thanks for any help!

Brian



Bee
QUOTE
- besize template 'aAuU', anything else tempplate I can use to
prepare fix length data ? - if yes, but how do I assuming the
block size is? In case,  if I write a binary file and I wanna
use seek.

You are going to have to explain that in more detail.

In case, I am doing something like a log with User v TimesOfSignIn.
So, user name will set as 30 char long, and the Signin times is about
in scope of a normal integer. I wanna make this a simple DB for this
but not a list of single files for each user. So I wanna make this
doable for ramdom read write, so, that should be in a binary file.

You want to have a look at DBM files. Read
perldoc DB_File
or
perldoc SDBM_File

Or you may try to install DBD::SQLite. That would give you the full
power of SQL without having to install anything but the module.

However, as the file is in a binary, I think there could be a size
benfit for me to compress the data length.

"Premature optimization is the root of all evil."

How many users are we talking about? Millions? I would not sweat over
a few KB you might save.

Yes, that's very right, I should do that with some DB modules!!! I am sorry,
I forgot to mention something here: I am just doing this as an experiment to
learn how pack works in this kind of situation.

So I am still looking for a way to get this job done with pack. Actually, I am
still quite confused about what kind of help I can expect from pack and unpack
in the future. I guess I will understand pack better once this gets done.

Would you help a little bit more? Thanks in advance.
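
Since Bee asked for a starting hint, here is a rough sketch of the
fixed-length-record idea with pack, unpack and seek. The 30-character name
field, the 32-bit count, the file name, and the assumption that the file
already holds at least three records are all mine, not anything from the
thread:

use strict;
use warnings;

# Each record: 30-byte space-padded user name + 32-bit unsigned count.
my $template   = 'A30 N';
my $record_len = length pack $template, '', 0;    # 34 bytes

open my $fh, '+<', 'signins.dat' or die "Can't open signins.dat: $!";
binmode $fh;

# Read record number 2 (records are numbered from 0).
my $recno = 2;
seek $fh, $recno * $record_len, 0 or die "seek failed: $!";
read $fh, my $buf, $record_len;
my ($user, $count) = unpack $template, $buf;

# Bump the count and write the record back in place.
seek $fh, $recno * $record_len, 0 or die "seek failed: $!";
print {$fh} pack($template, $user, $count + 1);
close $fh or die "close failed: $!";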


Invision Power Board © 2001-2006 Invision Power Services, Inc.