
Exam 312-50v11 topic 1 question 157 discussion

Actual exam question from ECCouncil's 312-50v11
Question #: 157
Topic #: 1

Which file is a rich target to discover the structure of a website during web-server footprinting?

  • A. domain.txt
  • B. Robots.txt
  • C. Document root
  • D. index.html
Suggested Answer: B

Comments

blacksheep6r
Highly Voted 1 year, 6 months ago
Information Gathering from Robots.txt File

A website owner creates a robots.txt file to list the files or directories a web crawler should index for providing search results. Poorly written robots.txt files can cause the complete indexing of website files and directories. If confidential files and directories are indexed, an attacker may easily obtain information such as passwords, email addresses, hidden links, and membership areas.

If the owner of the target website writes the robots.txt file without allowing the indexing of restricted pages for providing search results, an attacker can still view the robots.txt file of the site to discover restricted files and then view them to gather information. An attacker types URL/robots.txt in the address bar of a browser to view the target website's robots.txt file. An attacker can also download the robots.txt file of a target website using the Wget tool.

Certified Ethical Hacker (CEH) Version 11, pg 1650
upvoted 13 times
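The inspection step described above (read robots.txt and collect the paths the owner tried to hide) can be sketched in a few lines of Python. This is a minimal illustration, not a footprinting tool; the robots.txt body below is invented for the example.

```python
# Minimal sketch: extract Disallow paths from a robots.txt body.
# SAMPLE_ROBOTS_TXT is hypothetical content for illustration only;
# in practice the text would come from URL/robots.txt or Wget.

SAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Disallow: /backup/
Allow: /public/
Sitemap: https://example.com/sitemap.xml
"""

def disallowed_paths(robots_txt: str) -> list[str]:
    """Return every path named in a Disallow directive."""
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:                           # an empty Disallow means "allow all"
                paths.append(path)
    return paths

print(disallowed_paths(SAMPLE_ROBOTS_TXT))
# → ['/admin/', '/backup/']
```

The point of the example: the very directives meant to keep crawlers out double as a map of directories worth probing.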
...
tille
Highly Voted 1 year, 11 months ago
I would go with robots.txt. The question asks for a file, and from the contents of robots.txt the attacker can find directories that are not meant to be visible.
upvoted 10 times
...
Daniel8660
Most Recent 6 months, 3 weeks ago
Selected Answer: B
Web Server Attack Methodology, Information Gathering from Robots.txt File: The robots.txt file contains the list of the web server directories and files that the website owner wants to hide from web crawlers. Poorly written robots.txt files can cause the complete indexing of website files and directories. If confidential files and directories are indexed, an attacker may easily obtain information such as passwords, email addresses, hidden links, and membership areas. (P.1650/1634)
upvoted 3 times
...
disil98445
11 months ago
Selected Answer: B
robots.txt
upvoted 1 times
...
volatile
11 months ago
Selected Answer: B
The answer is B, Robots.txt. Read the question carefully: it asks which FILE. Robots.txt is a file; the document root is a directory, not a file. What is a robots.txt file used for? A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests. It can also be used to discover the structure of a website during web-server footprinting.
upvoted 3 times
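The crawler-side behaviour described above can be demonstrated with Python's standard-library robots.txt parser. This is a sketch on an invented rule set (no network access is needed, since `parse()` accepts the file's lines directly).

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rule set for illustration: the same directives a polite
# crawler honours also reveal site structure to anyone who reads them.
rules = [
    "User-agent: *",
    "Disallow: /members/",
    "Disallow: /staging/",
]

rp = RobotFileParser()
rp.parse(rules)

# A well-behaved crawler is refused; a human reading the file now knows
# these paths exist.
print(rp.can_fetch("*", "https://example.com/members/"))    # False
print(rp.can_fetch("*", "https://example.com/index.html"))  # True
```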
...
EngnSu
11 months ago
Selected Answer: B
According to CEHv11 P.1650, an attacker can simply request the Robots.txt file from the URL and retrieve sensitive information such as the root directory structure and content management system information about the target website.
upvoted 1 times
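Requesting the file "from the URL" relies on robots.txt living at a fixed, conventional location at the site root. A small sketch of building that location from any page URL (the example URL is hypothetical):

```python
from urllib.parse import urljoin, urlsplit

def robots_url(site_url: str) -> str:
    """Build the conventional robots.txt location for any page on a site."""
    parts = urlsplit(site_url)
    return urljoin(f"{parts.scheme}://{parts.netloc}", "/robots.txt")

print(robots_url("https://example.com/blog/post?id=1"))
# → https://example.com/robots.txt
```

Whatever page the attacker starts from, the candidate robots.txt URL is always scheme + host + /robots.txt, which is why the "type URL/robots.txt in the address bar" trick works.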
...
Madhusudanan
1 year ago
Selected Answer: C
Answer C: The document root is a directory that is stored on your host's servers and that is designated for holding web pages.
upvoted 1 times
...
alopezme
1 year, 4 months ago
It's robots.txt (underscore).
upvoted 1 times
...
Bot001
1 year, 6 months ago
ANSWER C. DOCUMENT ROOT
upvoted 1 times
...
brdweek
1 year, 8 months ago
robots.txt
upvoted 3 times
...
ANDRESCB1988
1 year, 9 months ago
Option C is correct: Document root.
upvoted 1 times
...
beowolf
1 year, 10 months ago
Robots.txt should be the right answer. Read the question: it says "file".
upvoted 7 times
...
cerzocuspi
2 years ago
Correct answer is Document root: The document root is a directory (a folder) that is stored on your host’s servers and that is designated for holding web pages.
upvoted 4 times
QuidProQuoo
1 year, 11 months ago
Therefore this cannot be the correct answer, because they are asking for a file.
upvoted 4 times
...
generate159357
1 year, 8 months ago
The document root is a directory of the website, but the question asks about a file describing the structure, which would be option D, index.html.
upvoted 2 times
...
...
americaman80
2 years ago
Correct answer is Document root. Explanation: The document root is a directory (a folder) that is stored on your host's servers and that is designated for holding web pages. When someone else looks at your web site, this is the location they will be accessing. In order for a website to be accessible to visitors, it must be published to the correct directory, the "document root."

You might think that there would only be one directory in your space on your host's servers, but often hosts provide services beyond just publishing a website. In this case, they are likely to set up every account with several directories, since each service would require its own.
upvoted 2 times
_Storm_
2 years ago
The question talks about a file, not a directory.
upvoted 14 times
...
...
Community vote distribution: A (35%), C (25%), B (20%), Other