[SOLVED] Creating a script

I have set myself a task: I want to create a script that reads a text file containing web addresses, downloads the file each address points to, and loops round until it has downloaded the last one.

I’m not sure how to do it so I am reading a few books on the subject.

As a start, I’ll try to create one that reads the file, displays each line, and waits for a keypress before getting the next one.

I did a bit of searching online too. I didn’t realise how easy it was.

while read LINE; do echo "$LINE"; done < ~/Peer_Blocklist.txt

Prints the web addresses onscreen.

while read LINE; do wget "$LINE"; done < ~/Peer_Blocklist.txt

Works :slight_smile:
But what’s the command to make it wait a few seconds and then continue?

sleep 5
or
sleep 10
etc.

maybe ?

Done it. I put in sleep 3 and it’s working OK.

I can’t believe how simple and small it is compared to what I thought it’d be.

The reason behind it is that it took ages to download each one manually, so I wanted to automate it.
I’ve included the list.

The problem is, understanding the logic
e.g.

for file
do
    pr "$file" > "$file.tmp"
done

The ‘for’: what exactly does it do to file (or, for that matter, to any other variable) to know how many times to loop?

Look at this:
http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-7.html

and these bash “for loop” examples
http://www.cyberciti.biz/faq/bash-for-loop/

Though I’m no programmer, it appears to just define what to loop over.
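To make that concrete, here is a minimal sketch (the names are just examples). The word list after `in` is exactly what sets how many times the loop runs: one pass per word.

```shell
# The list after 'in' determines the number of iterations: one pass per word.
for file in alpha beta gamma; do
    echo "processing $file"
done
# prints three lines, one per word

# A glob expands to a word list too, so 'for f in *.txt' loops once per .txt
# file in the current directory. With no 'in' list at all, 'for file' loops
# over the script's own arguments ("$@") - which is what the book example does.
```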

ta Mark :slight_smile:
Should help me.

Far out, the power of Bash amazes me every time I see it - in my head, I was already grabbing for a Python script!
(probably just me there ::))

Me too Chem, I was expecting to be writing a large script with loads of commands but the simplicity amazes me.
And, talking of Python, I’m trying to learn that too. When I was on Window$, I wanted to learn a programming language, but all I found were languages such as C++ and Visual Basic that cost hundreds of pounds - I didn’t know of the free ones around, as you expect everything to cost. Since moving to Ubuntu, it has really opened my eyes.

I have set myself a project in Python, when I can fully understand the commands. I am going to create an address book.

Is there a way I can save the file to another directory?
Also, to make it more of a challenge: the script downloads what I think are ‘tar.gz’ files - can I unpack them to another directory?
~/.config/transmission/blocklists

I’ve had a go myself…
Will this work?

# Change directory to where the file is to be downloaded
cd ~/Downloads/Blocklists

while read LINE
do
    # Download the file
    wget $LINE
    # Wait 3 seconds
    sleep 3
    # Unpack the file to Transmission's blocklists folder
    tar -xc *.tar ~/.config/transmission/blocklists
done < ~/Peer_Blocklist.txt

Dunno

One thing I do know though is the tar options -x and -c are “extract” and “create” … so you don’t want “-xc”

I think you’re more after something like:-

for i in *.tar; do tar -xf "$i" -C "$HOME/.config/transmission/blocklists"; done

Which should unpack all .tar files in the current directory to ~/.config/transmission/blocklists

So, it unpacks the file and places it in the blocklists folder.
I am wanting to do additional tasks, such as renaming it and including a number, sequentially from 1 upwards.
Can I use the ‘i’ in the for loop and add it to a new name, such as filename_$i.txt, as the unpacked file ends with .unarchived?

Bit more info on how I want to do it;
download the file
unpack it
rename it
move it
delete the archive

There is only ONE file in the archive.
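Those five steps could be sketched roughly like this. This is only a sketch, not tested against your URLs: temp directories stand in for ~/Downloads/Blocklists and ~/.config/transmission/blocklists, a hand-made .gz stands in for the wget download, and it assumes the archives are plain gzip files with one file each.

```shell
#!/bin/bash
# Rough sketch of: download -> unpack -> rename -> move -> delete.
src_dir=$(mktemp -d)    # stands in for ~/Downloads/Blocklists
dest_dir=$(mktemp -d)   # stands in for ~/.config/transmission/blocklists

# Pretend download - for real use, replace this line with the wget loop.
echo "127.0.0.1" | gzip > "$src_dir/level1.gz"

n=0
for archive in "$src_dir"/*.gz; do
    n=$((n + 1))
    # Unpack, rename and move in one step: gunzip -c writes to stdout,
    # so we get to choose the destination filename ourselves.
    gunzip -c "$archive" > "$dest_dir/blocklist_$n.txt"
    rm "$archive"       # delete the archive once unpacked
done
```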

I don’t understand the use of -f

tar -xf $i -C $HOME/.config/transmission/blocklists
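On -f: it tells tar which archive file to operate on (without it, tar falls back to a default tape device or the TAPE environment variable, which is rarely what you want). A small self-contained demo in a temp directory:

```shell
# -f names the archive; -c creates, -x extracts, -C changes directory first.
tmp=$(mktemp -d)
mkdir "$tmp/out"
echo "hello" > "$tmp/data.txt"

tar -cf "$tmp/demo.tar" -C "$tmp" data.txt   # create demo.tar containing data.txt
tar -xf "$tmp/demo.tar" -C "$tmp/out"        # extract it into out/
```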

Do not attempt to do it all in one go as debugging could be a nightmare.
Break it down to smaller (more manageable) chunks.

My approach would be to download all the files first…

#!/bin/bash

#-P = Directory where all files will be saved to
#-w = Wait the specified number of seconds between the retrievals.
#-i = Read URLs from input file

#set variables
Wget_dir="$HOME/tmp/tar/blocklist"
Untar_dir="$HOME/.config/transmission/blocklists"
export FILE_NUMBER=0
 
mkdir -p "$Wget_dir"
#mkdir -p "$Untar_dir"

wget --trust-server-names -P "$Wget_dir" -w 3 -i ~/Peer_Blocklist.txt

#Then loop through the downloaded files

for file in "$Wget_dir"/*
do
   ((FILE_NUMBER++))
   echo "$file"
   echo "$FILE_NUMBER"
   #tar -xzvf "$file" -C "$Untar_dir"
   STEM=$(basename "${file}" .gz)
   gunzip -c "${file}" > "$Untar_dir/${STEM}-$FILE_NUMBER.txt"
done

Once you are happy that it works you could delete the unneeded folders.

Just going to hide away for a couple of days while I try to understand the commands :wink:

SeZo, I don’t get the ‘~/Peer_Blocklist.txt’ part. I might not understand it properly, but it looks to me like it’s getting the archive and saving it under the name ‘Peer_Blocklist.txt’.

wget --trust-server-names -P $Wget_dir -w 3 -i ~/Peer_Blocklist.txt

I’ve looked at the commands with ‘man’ but still can’t figure it out!