
osintstalker's Introduction

Python Scripts

fbStalker - OSINT tool for Facebook, based on Facebook Graph search and related techniques
geoStalker - OSINT tool for geolocation-related sources - Flickr, Instagram, Twitter, Wigle. The user IDs found are used to find social media accounts across other networks such as Facebook, YouTube, Instagram, Google+, LinkedIn and Google Search

All updates/changes to the code will be posted via the Twitter account @osintstalker.
Please follow this account for updates and improvements to the code.

------------------------------------------------------------------------------------------ 
Presentation Slides of our talk at HackInTheBox Kuala Lumpur 2013.
http://conference.hitb.org/hitbsecconf2013kul/materials/D2T3%20-%20Keith%20Lee%20and%20Jonathan%20Werrett%20-%20Facebook%20OSINT.pdf
------------------------------------------------------------------------------------------ 

Videos
Geostalker running in Kali Linux
https://www.youtube.com/watch?v=qUqT9Ct2kg0&feature=youtu.be

------------------------------------------------------------------------------------------ 
Instructions for FBstalker
Install Google Chrome and ChromeDriver on Kali Linux
If you are using 32 bit Kali Linux (run uname -r to find out)
wget http://95.31.35.30/chrome/pool/main/g/google-chrome-stable/google-chrome-stable_27.0.1453.93-r200836_i386.deb

wget https://chromedriver.googlecode.com/files/chromedriver_linux32_23.0.1240.0.zip
unzip chromedriver_linux32_23.0.1240.0.zip
cp chromedriver /usr/bin/chromedriver
chmod 777 /usr/bin/chromedriver

If you are using 64 bit Kali Linux (run uname -r to find out)
wget http://95.31.35.30/chrome/pool/main/g/google-chrome-stable/google-chrome-stable_27.0.1453.93-r200836_amd64.deb

wget https://chromedriver.googlecode.com/files/chromedriver_linux64_23.0.1240.0.zip
unzip chromedriver_linux64_23.0.1240.0.zip
cp chromedriver /usr/bin/chromedriver
chmod 777 /usr/bin/chromedriver

sudo apt-get install python-setuptools
wget https://pypi.python.org/packages/source/p/pip/pip-1.4.1.tar.gz
tar xvfz pip-1.4.1.tar.gz
cd pip-1.4.1
sudo python2.7 setup.py install

pip install pytz
pip install tzlocal
pip install termcolor
pip install selenium
pip install requests --upgrade
pip install beautifulsoup4 

git clone https://github.com/hadim/pygraphml.git
cd pygraphml
python2.7 setup.py install
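
Before editing the script, a quick smoke test (not part of fbStalker, just a sanity check) should open and close a Chrome window without errors if Chrome, ChromeDriver and the Python packages are wired up correctly:

# sanity check: confirm selenium can drive the local Chrome via /usr/bin/chromedriver
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.facebook.com")
print "[*] Loaded page: " + driver.title
driver.quit()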

Edit fbstalker.py and update facebook_username (same as email address) and facebook_password.
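
The relevant lines near the top of fbstalker.py look roughly like this; the values shown are placeholders (use a throwaway account), only the variable names come from the script:

# credentials fbstalker.py uses to log in to Facebook (placeholder values)
facebook_username = "you@example.com"   # the email address you log in with
facebook_password = "yourpassword"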

Run python fbstalker.py -user [facebook target username]

------------------------------------------------------------------------------------------ 
Instructions for GEOstalker
For geoStalker, you will need to supply a number of API keys and access_tokens for now. The script will be updated later with an easier method of getting the access_tokens.

Dependencies for Geostalker 
[Tested on Kali Linux]

Install Google Chrome and ChromeDriver on Kali Linux
wget https://chromedriver.googlecode.com/files/chromedriver_linux64_23.0.1240.0.zip
unzip chromedriver_linux64_23.0.1240.0.zip
cp chromedriver /usr/bin/chromedriver
chmod 777 /usr/bin/chromedriver

If you are using 32 bit Kali Linux (run uname -r to find out)
wget http://95.31.35.30/chrome/pool/main/g/google-chrome-stable/google-chrome-stable_27.0.1453.93-r200836_i386.deb

If you are using 64 bit Kali Linux (run uname -r to find out)
wget http://95.31.35.30/chrome/pool/main/g/google-chrome-stable/google-chrome-stable_27.0.1453.93-r200836_amd64.deb

Python2.7
sudo apt-get install python-setuptools

wget https://pypi.python.org/packages/source/p/pip/pip-1.4.1.tar.gz
tar xvfz pip-1.4.1.tar.gz
cd pip-1.4.1
sudo python2.7 setup.py install

git clone https://github.com/hadim/pygraphml.git
cd pygraphml
python2.7 setup.py install

pip-2.7 install google
pip-2.7 install python-instagram
pip-2.7 install pygoogle
pip-2.7 install geopy
pip-2.7 install lxml
pip-2.7 install oauth2
pip-2.7 install python-linkedin
pip-2.7 install pygeocoder
pip-2.7 install selenium
pip-2.7 install termcolor
pip-2.7 install pysqlite
pip-2.7 install TwitterSearch
pip-2.7 install foursquare


wget https://gdata-python-client.googlecode.com/files/gdata-2.0.18.tar.gz
tar xvfz gdata-2.0.18.tar.gz
cd gdata-2.0.18
python2.7 setup.py install

------------------------------------------------------------------------------------------ 
********************************* HOW TO USE geoStalker **********************************
1.	Fill in the API keys and access_tokens in geostalker.py 
2.	If an API key or access_token is missing, it will skip that particular social network API lookup/search
3.	Run 'sudo python2.7 geostalker.py'
4.	Input a physical address or GPS coordinates
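
For reference, the address-to-coordinates step can be reproduced with the geopy package listed in the dependencies above. This is only an illustrative sketch, not geoStalker's actual code, and the Nominatim geocoder used here is an assumption:

# illustrative only: turn the user's input into (lat, lng), as geoStalker does
# before querying the location-based APIs; geostalker.py itself may use a
# different geocoder than Nominatim
import re
from geopy.geocoders import Nominatim

def to_coordinates(user_input):
    # input already looks like GPS coordinates, e.g. "4.237588,101.131332"
    if re.match(r'^\s*-?\d+(\.\d+)?\s*,\s*-?\d+(\.\d+)?\s*$', user_input):
        lat, lng = [float(x) for x in user_input.split(",")]
        return lat, lng
    # otherwise treat it as a street address and geocode it
    location = Nominatim(user_agent="geostalker-example").geocode(user_input)
    return location.latitude, location.longitude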


$  sudo python2.7 geostalker.py 
MMMMMM$ZMMMMMDIMMMMMMMMNIMMMMMMIDMMMMMMM
MMMMMMNINMMMMDINMMMMMMMZIMMMMMZIMMMMMMMM
MMMMMMMIIMMMMMI$MMMMMMMIIMMMM8I$MMMMMMMM
MMMMMMMMIINMMMIIMMMMMMNIIMMMOIIMMMMMMMMM
MMMMMMMMOIIIMM$I$MMMMNII8MNIIINMMMMMMMMM
MMMMMMMMMZIIIZMIIIMMMIIIM7IIIDMMMMMMMMMM
MMMMMMMMMMDIIIIIIIZMIIIIIII$MMMMMMMMMMMM
MMMMMMMMMMMM8IIIIIIZIIIIIIMMMMMMMMMMMMMM
MMMMMMMMMMMNIIIIIIIIIIIIIIIMMMMMMMMMMMMM
MMMMMMMMM$IIIIIIIIIIIIIIIIIII8MMMMMMMMMM
MMMMMMMMIIIIIZIIIIZMIIIIIDIIIIIMMMMMMMMM
MMMMMMOIIIDMDIIIIZMMMIIIIIMMOIIINMMMMMMM
MMMMMNIIIMMMIIII8MMMMM$IIIZMMDIIIMMMMMMM
MMMMIIIZMMM8IIIZMMMMMMMIIIIMMMM7IIZMMMMM
MMM$IIMMMMOIIIIMMMMMMMMMIIIIMMMM8IIDMMMM
MMDIZMMMMMIIIIMMMMMMMMMMNIII7MMMMNIIMMMM
MMIOMMMMMNIII8MMMMMMMMMMM7IIIMMMMMM77MMM
MO$MMMMMM7IIIMMMMMMMMMMMMMIII8MMMMMMIMMM
MIMMMMMMMIIIDMMMMMMMMMMMMM$II7MMMMMMM7MM
MMMMMMMMMIIIMMMMMMMMMMMMMMMIIIMMMMMMMDMM
MMMMMMMMMII$MMMMMMMMMMMMMMMIIIMMMMMMMMMM
MMMMMMMMNIINMMMMMMMMMMMMMMMOIIMMMMMMMMMM
MMMMMMMMNIOMMMMMMMMMMMMMMMMM7IMMMMMMMMMM
MMMMMMMMNINMMMMMMMMMMMMMMMMMZIMMMMMMMMMM
MMMMMMMMMIMMMMMMMMMMMMMMMMMM8IMMMMMMMMMM

**********************************************************
****** GeoStalker Version 1.0 HackInTheBox Release ******
**********************************************************

Please enter an address or GPS coordinates (e.g. 4.237588,101.131332):______________


osintstalker's Issues

IndexError

Yesterday I had no issues with the script, but today on a different machine and the same machine I get the following error:

[*] Caching Pages Liked By:
Traceback (most recent call last):
  File "FBStalker.py", line 2030, in <module>
    options(sys.argv)
  File "FBStalker.py", line 1974, in options
    mainProcess(user)
  File "FBStalker.py", line 1790, in mainProcess
    dataList = parsePagesLiked(html)
  File "FBStalker.py", line 1146, in parsePagesLiked
    pageCategory[count]
IndexError: list index out of range
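
One workaround until this is fixed upstream is to guard the lookup in parsePagesLiked so that a short pageCategory list is skipped instead of crashing; a minimal sketch (the category variable name here is illustrative, not from the script):

# workaround sketch: only index pageCategory when that element actually exists
if count < len(pageCategory):
    category = pageCategory[count]
else:
    category = ""   # Facebook markup changed or no category for this page; skip it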

2 questions/requests

Is there a way to limit the radius for which networks, tweets and images are shown?

Is there way limit the date range?
e.g. show me only tweets in a given area for the last 24hr?
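
Neither option exists in geoStalker today, so the results would have to be filtered after collection. A rough standalone sketch of such post-filtering, assuming a made-up record format with 'lat', 'lng' and 'time' keys (not geoStalker's actual output):

# post-filtering sketch: keep only records within radius_km of (lat, lng)
# and newer than max_age_hours; the dict keys used here are an assumed
# example format, not what geostalker.py actually produces
import math, datetime

def haversine_km(lat1, lng1, lat2, lng2):
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlng = math.radians(lng2 - lng1)
    a = math.sin(dlat/2)**2 + math.cos(rlat1)*math.cos(rlat2)*math.sin(dlng/2)**2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def filter_results(records, lat, lng, radius_km=1.0, max_age_hours=24):
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(hours=max_age_hours)
    return [r for r in records
            if haversine_km(lat, lng, r['lat'], r['lng']) <= radius_km
            and r['time'] >= cutoff]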

Geostalker - several APIs seem to break results.html so it no longer has the Twitter map

I have Google info filled out, and the Instagram API, Foursquare API, LinkedIn API, Twitter API, and Wigle username/password provided.

If I only have my Google and Twitter info there, I am able to run this and get a map with all the people's tweets, and it is really nice. When I enable all the other APIs I no longer get the map; I still get a lot of tweets at the bottom with geolocation on them, just the map seems to break. Does anyone else have this issue?

If Wigle user info is put in at all, geostalker.py breaks with the following error:
[*] Downloading Wigle database from Internet
[*] Saving wigle database to: 39.6596737_-105.0103375.dat
[*] Converting Wigle database to KML format.
[*] Convert Wigle Database to KML format
[*] Convert wigle database to: 39.6596737_-105.0103375.kml
[*] Wigle database already exists: 39.6596737_-105.0103375.dat
[*] Checking Google Docs if File Exists
[*] Logging in...
[!] Unknown Error: Incorrect username or password

Not sure why this is generating a file, then trying to find it in Google Docs... it's weird. Any insight would be helpful.

Thanks for your time,

Crash on UnicodeEncodeError

Is there a way to get around FBStalker crashing on users with unicode characters? The script is running fine for me on Kali Linux but crashes on UnicodeEncodeError.

Also, it took a while for me to figure out that I need to use a 64-bit ChromeDriver to get this beast running. Maybe you could include a link to that one in the installation instructions.

Thanks
Martin

Traceback (most recent call last):
  File "fbstalker1.py", line 2030, in <module>
    options(sys.argv)
  File "fbstalker1.py", line 1974, in options
    mainProcess(user)
  File "fbstalker1.py", line 1913, in mainProcess
    dataList = parseTimeline(html,username)
  File "fbstalker1.py", line 522, in parseTimeline
    print "[*] Location of Post: "+str(tlDateTimeLoc[1].text)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 1: ordinal not in range(128)
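
One workaround, as also suggested in the "ValueError: too many values to unpack" issue further down, is to avoid the implicit ASCII conversion when printing, e.g. by encoding the value explicitly (a local patch, not an upstream fix):

# workaround sketch: encode the unicode text explicitly instead of relying on str(),
# which falls back to ASCII and raises UnicodeEncodeError on characters like u'\xf6'
print "[*] Location of Post: " + tlDateTimeLoc[1].text.encode('utf-8')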

No module named GraphMLParser

File "fbstalker1.py", line 18, in
from pygraphml.GraphMLParser import *
ImportError: No module named GraphMLParser

After installing all the pre-reqs I run it and get this. I have worked around it by removing the .GraphMLParser portion, but I'm not sure if I am breaking the application by doing so. Am I missing something or is this a real issue?
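
Newer pygraphml releases expose these classes at the package level, so the fix posted in the "I solved 2 bugs but cannot commit" issue further down is to replace the import block (lines 17-21 of the script) with:

# package-level imports work with current pygraphml, which no longer ships
# a GraphMLParser submodule
from pygraphml import GraphMLParser
from pygraphml import Graph
from pygraphml import Node
from pygraphml import Edge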

division by zero error

python ../fbstalker.py -user joey.zhi -report report.txt
[...]

Traceback (most recent call last):
  File "../fbstalker.py", line 2001, in <module>
    options(sys.argv)
  File "../fbstalker.py", line 1948, in options
    mainProcess(user)
  File "../fbstalker.py", line 1887, in mainProcess
    dataList = parseTimeline(html,username)
  File "../fbstalker.py", line 776, in parseTimeline
    reportFile.write("Total % (00:00 to 03:00) "+str((timeSlot1/totalLen)*100)+" %\n")
ZeroDivisionError: division by zero
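
A quick local workaround (not an upstream fix) is to guard the percentage calculation for the case where no timeline posts were collected, for example:

# workaround sketch: skip the percentage when the timeline yielded no posts
if totalLen > 0:
    reportFile.write("Total % (00:00 to 03:00) "+str((timeSlot1/totalLen)*100)+" %\n")
else:
    reportFile.write("Total % (00:00 to 03:00) 0 %\n")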

Failing to write to DB

So I have this script working again after working around the indexing error a few posts down. Now the script runs start to finish, but it doesn't seem to parse the saved HTML files with BeautifulSoup properly. I am receiving a list index out of range error when trying to write to the DB:

Writing 0 records to table: photosLiked
[*] Writing 0 record(s) to database table: photosLiked
list index out of range

This is consistent for all categories. I am assuming that the way the file is getting parsed is no longer meeting the list lengths specified in the code, but I'm not sure how to validate my assumption.

Here is the code section defining the columns:

"""def write2Database(dbName,dataList):
try:
cprint("[] Writing "+str(len(dataList))+" record(s) to database table: "+dbName,"white")
#print "[
] Writing "+str(len(dataList))+" record(s) to database table: "+dbName
numOfColumns = len(dataList[0])
c = conn.cursor()
if numOfColumns==3:
for i in dataList:
try:
c.execute('INSERT INTO '+dbName+' VALUES (?,?,?)', i)
conn.commit()
except sqlite3.IntegrityError:
continue
if numOfColumns==4:
for i in dataList:
try:
c.execute('INSERT INTO '+dbName+' VALUES (?,?,?,?)', i)
conn.commit()
except sqlite3.IntegrityError:
continue
if numOfColumns==5:
for i in dataList:
try:
c.execute('INSERT INTO '+dbName+' VALUES (?,?,?,?,?)', i)
conn.commit()
except sqlite3.IntegrityError:
continue
if numOfColumns==9:
for i in dataList:
try:
c.execute('INSERT INTO '+dbName+' VALUES (?,?,?,?,?,?,?,?,?)', i)
conn.commit()
except sqlite3.IntegrityError:
continue
except TypeError as e:
print e
pass
except IndexError as e:
print e
pass"""

Example of the parsing functions:

"""def parsePhotosOf(html):
soup = BeautifulSoup(html)
photoPageLink = soup.findAll("a", {"class" : "23q"})
tempList = []
for i in photoPageLink:
html = str(i)
soup1 = BeautifulSoup(html)
pageName = soup1.findAll("img", {"class" : "img"})
pageName1 = soup1.findAll("img", {"class" : "scaledImageFitWidth img"})
pageName2 = soup1.findAll("img", {"class" : "46-i img"})
for z in pageName2:
if z['src'].endswith('.jpg'):
url1 = i['href']
r = re.compile('fbid=(.*?)&set=bc')
m = r.search(url1)
if m:
filename = 'fbid
'+ m.group(1)+'.html'
filename = filename.replace("profile.php?id=","")
if not os.path.lexists(filename):
#html1 = downloadPage(url1)
html1 = downloadFile(url1)
print "[
] Caching Photo Page: "+m.group(1)
text_file = open(filename, "w")
text_file.write(normalize(html1))
text_file.close()
else:
html1 = open(filename, 'r').read()
soup2 = BeautifulSoup(html1)
username2 = soup2.find("div", {"class" : "fbPhotoContributorName"})
r = re.compile('a href="(.?)"')
m = r.search(str(username2))
if m:
username3 = m.group(1)
username3 = username3.replace("https://www.facebook.com/","")
username3 = username3.replace("profile.php?id=","")
print "[
] Extracting Data from Photo Page: "+username3
tempList.append([str(uid),z['alt'],z['src'],i['href'],username3])
for y in pageName1:
if y['src'].endswith('.jpg'):
url1 = i['href']
r = re.compile('fbid=(.?)&set=bc')
m = r.search(url1)
if m:
filename = 'fbid
'+ m.group(1)+'.html'
filename = filename.replace("profile.php?id=","")
if not os.path.lexists(filename):
#html1 = downloadPage(url1)
html1 = downloadFile(url1)
print "[] Caching Photo Page: "+m.group(1)
text_file = open(filename, "w")
text_file.write(normalize(html1))
text_file.close()
else:
html1 = open(filename, 'r').read()
soup2 = BeautifulSoup(html1)
username2 = soup2.find("div", {"class" : "fbPhotoContributorName"})
r = re.compile('a href="(.
?)"')
m = r.search(str(username2))
if m:
username3 = m.group(1)
username3 = username3.replace("https://www.facebook.com/","")
username3 = username3.replace("profile.php?id=","")
print "[] Extracting Data from Photo Page: "+username3
tempList.append([str(uid),y['alt'],y['src'],i['href'],username3])
for x in pageName:
if x['src'].endswith('.jpg'):
url1 = i['href']
r = re.compile('fbid=(.
?)&set=bc')
m = r.search(url1)
if m:
filename = 'fbid_'+ m.group(1)+'.html'
filename = filename.replace("profile.php?id=","")
if not os.path.lexists(filename):
#html1 = downloadPage(url1)
html1 = downloadFile(url1)
print "[] Caching Photo Page: "+m.group(1)
text_file = open(filename, "w")
text_file.write(normalize(html1))
text_file.close()
else:
html1 = open(filename, 'r').read()
soup2 = BeautifulSoup(html1)
username2 = soup2.find("div", {"class" : "fbPhotoContributorName"})
r = re.compile('a href="(.
?)"')
m = r.search(str(username2))
if m:
username3 = m.group(1)
username3 = username3.replace("https://www.facebook.com/","")
username3 = username3.replace("profile.php?id=","")
print "[*] Extracting Data from Photo Page: "+username3
tempList.append([str(uid),x['alt'],x['src'],i['href'],username3])"""

Error with webdriver.py

Error:

[*] Username:
Traceback (most recent call last):
File "fbstalker1.py", line 2030, in
options(sys.argv)
File "fbstalker1.py", line 1974, in options
mainProcess(user)
File "fbstalker1.py", line 1748, in mainProcess
loginFacebook(driver)
File "fbstalker1.py", line 324, in loginFacebook
driver.implicitly_wait(120)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 638, in implicitly_wait
self.execute(Command.IMPLICIT_WAIT, {'ms': float(time_to_wait) * 1000})
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 164, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/errorhandler.py", line 164, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: u"Unknown command 'WaitForAllTabsToStopLoading'. Options: AcceptOrDismissAppModalDialog, ActionOnSSLBlockingPage, ActivateTab, AddBookmark, AddDomEventObserver, AppendTab, ApplyAccelerator, BringBrowserToFront, ClearEventQueue, CloseBrowserWindow, CloseTab, CreateNewAutomationProvider, DeleteCookie, DeleteCookieInBrowserContext, DoesAutomationObjectExist, DragAndDropFilePaths, ExecuteJavascript, ExecuteJavascriptInRenderView, GetActiveTabIndex, GetAppModalDialogMessage, GetBookmarkBarStatus, GetBookmarksAsJSON, GetBrowserInfo, GetBrowserWindowCount, GetChromeDriverAutomationVersion, GetCookies, GetCookiesInBrowserContext, GetDownloadDirectory, GetExtensionsInfo, GetIndicesFromTab, GetLocalStatePrefsInfo, GetMultiProfileInfo, GetNextEvent, GetPrefsInfo, GetProcessInfo, GetSecurityState, GetTabCount, GetTabIds, GetTabInfo, GetViews, GoBack, GoForward, InstallExtension, IsDownloadShelfVisible, IsFindInPageVisible, IsMenuCommandEnabled, IsPageActionVisible, IsTabIdValid, MaximizeView, NavigateToURL, OpenFindInPage, OpenNewBrowserWindow, OpenNewBrowserWindowWithNewProfile, OpenProfileWindow, OverrideGeoposition, RefreshPolicies, Reload, RemoveBookmark, RemoveEventObserver, ReparentBookmark, RunCommand, SendWebkitKeyEvent, SetBookmarkTitle, SetBookmarkURL, SetCookie, SetCookieInBrowserContext, SetDownloadShelfVisible, SetExtensionStateById, SetLocalStatePrefs, SetPrefs, SetViewBounds, SimulateAsanMemoryBug, TriggerBrowserActionById, TriggerPageActionById, UninstallExtensionById, UpdateExtensionsNow, WaitForBookmarkModelToLoad, WaitUntilNavigationCompletes, WebkitMouseButtonDown, WebkitMouseButtonUp, WebkitMouseClick, WebkitMouseDoubleClick, WebkitMouseDrag, WebkitMouseMove, AcceptCurrentFullscreenOrMouseLockRequest, AddOrEditSearchEngine, AddSavedPassword, CloseNotification, DenyCurrentFullscreenOrMouseLockRequest, DisablePlugin, EnablePlugin, FindInPage, GetAllNotifications, GetDownloadsInfo, GetFPS, GetHistoryInfo, GetInitialLoadTimes, GetNTPInfo, GetNavigationInfo, GetOmniboxInfo, GetPluginsInfo, GetSavedPasswords, GetSearchEngineInfo, GetV8HeapStats, IsFullscreenBubbleDisplayed, IsFullscreenBubbleDisplayingButtons, IsFullscreenForBrowser, IsFullscreenForTab, IsFullscreenPermissionRequested, IsMouseLockPermissionRequested, IsMouseLocked, KillRendererProcess, LaunchApp, LoadSearchEngineInfo, OmniboxAcceptInput, OmniboxMovePopupSelection, PerformActionOnDownload, PerformActionOnInfobar, PerformActionOnSearchEngine, RemoveNTPMostVisitedThumbnail, RemoveSavedPassword, RestoreAllNTPMostVisitedThumbnails, SaveTabContents, SetAppLaunchType, SetOmniboxText, SetWindowDimensions, WaitForAllDownloadsToComplete, WaitForNotificationCount, "

so close to run, maybe...

I changed the links for Chrome and the installation seems OK, but after: python fbstalker1.py -user blabla
this is what I get:

Traceback (most recent call last):
  File "fbstalker.py", line 18, in <module>
    from pygraphml.GraphMLParser import *
ImportError: No module named GraphMLParser

When I drop that line, it hits the same error on the next ones (Graph, Node, Edge).

Any idea?

Error when running tool

root@Kali:~/Scripts/osintstalker-master# python fbstalker1.py
Traceback (most recent call last):
  File "fbstalker1.py", line 60, in <module>
    driver = webdriver.Chrome(chrome_options=chromeOptions)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/webdriver.py", line 64, in __init__
    desired_capabilities=desired_capabilities)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 72, in __init__
    self.start_session(desired_capabilities, browser_profile)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 114, in start_session
    'desiredCapabilities': desired_capabilities,
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 165, in execute
    self.error_handler.check_response(response)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/errorhandler.py", line 158, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: u'Unable to either launch or connect to Chrome. Please check that ChromeDriver is up-to-date. Using Chrome binary at: /usr/bin/google-chrome'
root@Kali:~/Scripts/osintstalker-master#

Chrome also pops up with an about:blank tab. Followed instructions in readme. Running Kali.

Index error

After the script collects all of the pages liked by a user, it crashes with the following error.

Traceback (most recent call last):
  File "fbstalker2.py", line 2123, in <module>
    options(sys.argv)
  File "fbstalker2.py", line 2066, in options
    mainProcess(user)
  File "fbstalker2.py", line 1883, in mainProcess
    dataList = parsePagesLiked(html)
  File "fbstalker2.py", line 1223, in parsePagesLiked
    pageCategory[count]
IndexError: list index out of range

I can't get this to run for the life of me.

Kali Linux wasn't happening, so I thought I'd try an XP VM.

Traceback (most recent call last):
  File "E:\osintstalker\fbstalker.py", line 60, in <module>
    driver = webdriver.Chrome(chrome_options=chromeOptions)
  File "C:\Python26\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 64, in __init__
    desired_capabilities=desired_capabilities)
  File "C:\Python26\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 71, in __init__
    self.start_session(desired_capabilities, browser_profile)
  File "C:\Python26\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 113, in start_session
    'desiredCapabilities': desired_capabilities,
  File "C:\Python26\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 164, in execute
    self.error_handler.check_response(response)
  File "C:\Python26\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 164, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: u'chrome not reachable\n
(Driver info: chromedriver=2.4.226107,platform=Windows NT 5.1 SP3 x86)'

selenium-server-standalone-2.37.0.jar is up and running but nothing gets sent to it.

What am I supposed to put where? Maybe what I have is wrong.

facebook_access_token="" < Where's the access token? Which cookie? datr I assumed but who knows.

facebook_username = "" < You want account name or will email do?
facebook_password = ""

global uid
uid = "" < What's this? the numeric account name?
username = "" < My account name again?
internetAccess = True
all_cookies = {}
reportFileName = ""

Oh yeah, setuptools isn't going to work for Python 2.6 on Linux as it's missing the sha256 module.

IndexError after "Caching Pages Liked By: <User>"

Was able to fight through the last issue I had, and now it gets to caching the pages liked and errors out with this:

Traceback (most recent call last):
  File "fbstalker1.py", line 2030, in <module>
    options(sys.argv)
  File "fbstalker1.py", line 1974, in options
    mainProcess(user)
  File "fbstalker1.py", line 1790, in mainProcess
    dataList = parsePagesLiked(html)
  File "fbstalker1.py", line 1146, in parsePagesLiked
    pageCategory[count]
IndexError: list index out of range

Inconsistent use of tabs and spaces in indentation

I'm totally new to Python. I tried version 3.3.3 and then pressed "Run" while in IDLE. I keep getting the same error message, even after trying other versions and installing Kali Linux and so on... I'm a total noob with Python and the workarounds.

Can someone please explain this for extreme noobs (in non-programming terms)?

THANKS A LOT!

plz help

[*] Downloading Wigle database from Internet
Traceback (most recent call last):
  File "geostalker.py", line 1652, in <module>
    options(sys.argv)
  File "geostalker.py", line 1645, in options
    mainProcess(location)
  File "geostalker.py", line 1617, in mainProcess
    geoLocationSearch(lat,lng)
  File "geostalker.py", line 1457, in geoLocationSearch
    html1 = downloadWigle(lat,lng,wigle_cookie)
  File "geostalker.py", line 662, in downloadWigle
    headers = {'Cookie': resp['set-cookie']}
KeyError: 'set-cookie'
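
This happens when the Wigle login response carries no Set-Cookie header (for example, bad credentials or a changed login flow). A defensive sketch around that line in downloadWigle, assuming resp is the httplib2 response shown in the traceback:

# workaround sketch: bail out cleanly instead of crashing when Wigle did not
# return a session cookie (the httplib2 response behaves like a dict)
if 'set-cookie' in resp:
    headers = {'Cookie': resp['set-cookie']}
else:
    print "[!] Wigle login failed - no session cookie returned, skipping Wigle lookup"
    return None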

ValueError: too many values to unpack

Crashes with "ValueError: too many values to unpack" when going through all the downloaded data in the end if a user has 'too many' friends or is found in too many photos (I think?).

There are also some problems with encoding when it parses the timeline. Changing str() calls to unicode() worked for me.

No Maltego file created

I'm having problems with fbstalker.py. I've tried the script on my own Facebook profile with success! But when I try to stalk the real-life target I'm interested in, the script skips creating the Maltego file with no errors. In the terminal I get the following:

[*] Time of Post: 2011-12-08 01:31:33
[*] Time of Post: 2011-09-28 21:26:38
[*] Time of Post: 2011-09-20 20:00:05
[*] Time of Post: 2011-09-20 16:57:20
[*] Time of Post: 2011-07-11 05:13:47
[*] Time of Post: 2011-06-07 20:53:35
[*] Time of Post: 2010-05-27 00:09:09
[*] Downloading User Information
[*] Report has been written to: XXXX_report.txt
[*] Preparing Maltego output...
[*] Maltego file has been created

There is no Maltego file or graph directory created, and since no errors are thrown I don't know what's wrong.
Any ideas?
Thanks
Martin

[Errno 101] Network is unreachable

[*] Username:
Traceback (most recent call last):
  File "fbstalker1.py", line 2030, in <module>
    options(sys.argv)
  File "fbstalker1.py", line 1974, in options
    mainProcess(user)
  File "fbstalker1.py", line 1749, in mainProcess
    uid = convertUser2ID2(driver,username)
  File "fbstalker1.py", line 289, in convertUser2ID2
    resp, content = h.request(url, "GET")
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", line 1570, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", line 1317, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", line 1290, in _conn_request
    conn.connect()
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", line 913, in connect
    raise socket.error, msg
socket.error: [Errno 101] Network is unreachable

[!] Problem converting username to uid

Hi,
I managed to launch the script after adapting some modules, but I keep hitting this error:
"[!] Problem converting username to uid". I can't find the solution; could you help me please?

Error with some profiles

I occasionally get this error with certain profiles:

File "fbstalker1.py", line 2030, in
options(sys.argv)
File "fbstalker1.py", line 1974, in options
mainProcess(user)
File "fbstalker1.py", line 1871, in mainProcess
dataList = parseFriends(html)
File "fbstalker1.py", line 1659, in parseFriends
month,year = value.split(" ")
ValueError: too many values to unpack

cheers
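
Some profiles apparently render the friend's "Month Year" value with extra tokens. A tolerant split (a local workaround, not an official fix) avoids the crash:

# workaround sketch: only unpack when the value really is "Month Year";
# anything else is treated as unknown instead of crashing
parts = value.split(" ")
if len(parts) == 2:
    month, year = parts
else:
    month, year = "", ""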

Cannot Connect to Chrome Driver

I am running Kali 64bit in a VM. When I try to execute: python fbstalker1.py xxx

I get the following error....

Traceback (most recent call last):
  File "fbstalker1.py", line 59, in <module>
    driver = webdriver.Chrome(chrome_options=chromeOptions)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/webdriver.py", line 59, in __init__
    self.service.start()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/service.py", line 74, in start
    raise WebDriverException("Can not connect to the ChromeDriver")
selenium.common.exceptions.WebDriverException: Message: 'Can not connect to the ChromeDriver'

Since I am running 64 bit I installed the 64 bit Chrome browser and the 64 bit ChromeDriver. Your instructions only mention the 32 bit ChromeDriver. Any suggestions?

httplib2 no module

Hi, when I try to run it I get this error:

➜  osintstalker git:(master) python fbstalker.py -u XXXX -report report.txt
Traceback (most recent call last):
  File "fbstalker.py", line 3, in <module>
    import httplib2,json
ImportError: No module named httplib2

But I have httplib2 installed:

➜  ~  pip install httplib2
Requirement already satisfied (use --upgrade to upgrade): httplib2 in /usr/local/lib/python2.7/site-packages
Cleaning up...

Any guess?

Error: Traceback (most recent call last) ...

After the Facebook login page opens automatically, I get this error:
  File "fbstalker1.py", line 2030, in <module>
    options(sys.argv)
  File "fbstalker1.py", line 1748, in mainProcess
    loginFacebook(driver)
  File "fbstalker1.py", line 326, in loginFacebook
    assert "Welcome to Facebook" in driver.title
AssertionError

Why?

geostalker.py - IndexError: list index out of range

Please enter an address or GPS coordinates (e.g. 4.237588,101.131332): 4.237588,101.131332
[*] Converting address to GPS coordinates: 4.237588 101.131332
[*] Downloading Wigle database from Internet
[*] Saving wigle database to: 4.237588_101.131332.dat
[*] Converting Wigle database to KML format.
[*] Convert Wigle Database to KML format
[*] Convert wigle database to: 4.237588_101.131332.kml
[*] Wigle database already exists: 4.237588_101.131332.dat
[*] Checking Google Docs if File Exists
[*] Logging in... Login success!
[*] File does not exists!
[*] Uploading: 4.237588_101.131332.kml to Google Docs!
[*] Logging in... Login success!
Fetching collection ID... success!
Starting uploading of file... Upload success!
[*] Change: 4.237588_101.131332.kml Access to Public
[*] Logging in... Login success!
owner user [email protected]
Permissions change success!
[*] Extracting MAC addresses from Wigle database: 4.237588_101.131332.dat
[*] Retrieving match for vendor name: Function ATI (Huizhou) Telecommunications Co., Ltd.
[*] Retrieving match for vendor name: ADVANCED MICRO DEVICES
[*] Retrieving match for vendor name: INFORMATION TECHNOLOGY LIMITED
[*] va corresponds to ADVANCED MICRO DEVICES
[*] fo corresponds to INFORMATION TECHNOLOGY LIMITED
[*] functi corresponds to Function ATI (Huizhou) Telecommunications Co., Ltd.
Traceback (most recent call last):
  File "geostalker.py", line 1328, in <module>
    geoLocationSearch(lat,lng)
  File "geostalker.py", line 1236, in geoLocationSearch
    parseWigleDat(filename)
  File "geostalker.py", line 576, in parseWigleDat
    locLat = i.split('~')[11]
IndexError: list index out of range
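
Wigle sometimes returns rows with fewer '~'-separated fields than parseWigleDat expects. Skipping short rows (a local workaround) avoids the crash:

# workaround sketch: ignore malformed Wigle rows instead of indexing past the end
fields = i.split('~')
if len(fields) > 11:
    locLat = fields[11]
else:
    continue   # this row has no coordinate fields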

Installation problem

Hello,
how do I install Google Chrome and ChromeDriver? It doesn't work at this address.
I want to install on Kali. I'm a new Kali user and so don't know the environment well.
Thanks

Says chromedriver isn't in the path..

I downloaded and installed chromedriver to /usr/bin as well as to the current path. I also ran chromedriver in the background just in case it was having trouble starting it or the like, but I still get this:

Any idea?

Traceback (most recent call last):
  File "fbstalker.py", line 61, in <module>
    driver = webdriver.Chrome(chrome_options=chromeOptions)
  File "/usr/lib/python2.7/site-packages/selenium-2.37.0-py2.7.egg/selenium/webdriver/chrome/webdriver.py", line 59, in __init__
    self.service.start()
  File "/usr/lib/python2.7/site-packages/selenium-2.37.0-py2.7.egg/selenium/webdriver/chrome/service.py", line 68, in start
    and read up at http://code.google.com/p/selenium/wiki/ChromeDriver")
selenium.common.exceptions.WebDriverException: Message: 'ChromeDriver executable needs to be available in the path. Please download from http://code.google.com/p/chromedriver/downloads/list and read up at http://code.google.com/p/selenium/wiki/ChromeDriver'

I solved 2 bugs but cannot commit

So I'll put my code here.
1st bug: an alert from Facebook asking to authorize notifications.
Add the following lines in the #Chrome Options section (line 50):

prefs = {"profile.default_content_setting_values.notifications":2}
chromeOptions.add_experimental_option("prefs",prefs)

2nd bug: pygraphml import, replace lines 17-21 by:

from pygraphml import GraphMLParser
from pygraphml import Graph
from pygraphml import Node
from pygraphml import Edge

3rd bug: update name of div for friends: replace corresponding lines by

        friendBlockData = soup.findAll("div",{"class" : "_3ul._gli.5_und"})
        friendNameData = soup.findAll("div", {"class" : "_5d-5"})

selenium / webdriver issue

Hi all,
May I have some help please? There seems to be a problem with Selenium and the webdriver, but after hours and days of research I'm still not able to fix it...
Here is what I got:
Traceback (most recent call last):
  File "fbstalker1.py", line 59, in <module>
    driver = webdriver.Chrome(chrome_options=chromeOptions)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/webdriver.py", line 75, in __init__
    desired_capabilities=desired_capabilities)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 154, in __init__
    self.start_session(desired_capabilities, browser_profile)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 243, in start_session
    response = self.execute(Command.NEW_SESSION, parameters)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 312, in execute
    self.error_handler.check_response(response)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally
(Driver info: chromedriver=2.35.528139 (47ead77cb35ad2a9a83248b292151462a66cd881),platform=Linux 4.14.0-kali3-amd64 x86_64)

Thanks for your help
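
Chrome generally refuses to start when launched as root unless its sandbox is disabled. Adding a flag to the existing chromeOptions (a common workaround, not something the script does by default) often resolves this:

# workaround sketch: when running fbstalker as root on Kali, Chrome needs the
# sandbox disabled or it exits immediately with "exited abnormally"
chromeOptions.add_argument('--no-sandbox')
driver = webdriver.Chrome(chrome_options=chromeOptions)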

Crash after loading additional "likes"

For clarification, I'm using Windows; I'm familiar with Java programming, not so much with Python. Anyway, the program loads a few likes, then crashes after waiting for a loading screen.
[screenshot attached: capture1]

chrome drivers error

root:~/osintstalker# python fbstalker1.py
Traceback (most recent call last):
  File "fbstalker1.py", line 59, in <module>
    driver = webdriver.Chrome(chrome_options=chromeOptions)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/webdriver.py", line 61, in __init__
    self.service.start()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/common/service.py", line 69, in start
    os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home

Exception AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0xb5d4908c>> ignored

I tried putting the chromedriver executable in the PATH and every other way, but it's still not working.
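
If /usr/bin is somehow not on the PATH of the shell running the script, the driver location can also be passed to Selenium explicitly (a workaround sketch using the executable_path argument of the Selenium 2/3 API):

# workaround sketch: point Selenium at the chromedriver binary directly
driver = webdriver.Chrome(executable_path='/usr/bin/chromedriver',
                          chrome_options=chromeOptions)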
