Friday, November 23, 2012

Linux/Unix must-know commands

20 Linux System Monitoring Tools Every SysAdmin Should Know

June 27, 2009 · Last updated November 6, 2012

Need to monitor Linux server performance? Try these built-in commands and a few add-on tools. Most Linux distributions are equipped with tons of monitoring tools. These tools provide metrics which can be used to get information about system activities. You can use these tools to find the possible causes of a performance problem. The commands discussed below are some of the most basic commands when it comes to system analysis and debugging server issues such as:

  1. Finding out bottlenecks.
  2. Disk (storage) bottlenecks.
  3. CPU and memory bottlenecks.
  4. Network bottlenecks.


#1: top - Process Activity Command

The top program provides a dynamic real-time view of a running system, i.e. actual process activity. By default, it displays the most CPU-intensive tasks running on the server and updates the list every three seconds (the default delay).

Fig.01: Linux top command


Commonly Used Hot Keys

The top command provides several useful hot keys:

Hot Key   Usage
t         Toggles the summary information display on and off.
m         Toggles the memory information display on and off.
A         Sorts the display by top consumers of various system resources. Useful for quick identification of performance-hungry tasks.
f         Enters an interactive configuration screen for top. Helpful for setting up top for a specific task.
o         Enables you to interactively select the ordering within top.
r         Issues the renice command.
k         Issues the kill command.
z         Turns color/mono mode on or off.


=> Related: How do I Find Out Linux CPU Utilization?

#2: vmstat - System Activity, Hardware and System Information

The vmstat command reports information about processes, memory, paging, block IO, traps, and CPU activity.
# vmstat 3
Sample Outputs:

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 2540988 522188 5130400    0    0     2    32    4    2  4  1 96  0  0
 1  0      0 2540988 522188 5130400    0    0     0   720 1199  665  1  0 99  0  0
 0  0      0 2540956 522188 5130400    0    0     0     0 1151 1569  4  1 95  0  0
 0  0      0 2540956 522188 5130500    0    0     0     6 1117  439  1  0 99  0  0
 0  0      0 2540940 522188 5130512    0    0     0   536 1189  932  1  0 98  0  0
 0  0      0 2538444 522188 5130588    0    0     0     0 1187 1417  4  1 96  0  0
 0  0      0 2490060 522188 5130640    0    0     0    18 1253 1123  5  1 94  0  0

Display Memory Utilization Slabinfo

# vmstat -m

Get Information About Active / Inactive Memory Pages

# vmstat -a
=> Related: How do I find out Linux Resource utilization to detect system bottlenecks?

#3: w - Find Out Who Is Logged on And What They Are Doing

The w command displays information about the users currently on the machine and their processes.
# w username
# w vivek

Sample Outputs:

 17:58:47 up 5 days, 20:28,  2 users,  load average: 0.36, 0.26, 0.24
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    10.1.3.145       14:55    5.00s  0.04s  0.02s vim /etc/resolv.conf
root     pts/1    10.1.3.145       17:43    0.00s  0.03s  0.00s w

#4: uptime - Tell How Long The System Has Been Running

The uptime command can be used to see how long the server has been running. It displays the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.
# uptime
Output:

   18:02:41 up 41 days, 23:42,  1 user,  load average: 0.00, 0.00, 0.00

A load value of 1 per CPU can be considered optimal; what counts as acceptable varies from system to system. On a single-CPU system a load of 1-3 may be acceptable, while on SMP systems values of 6-10 might be.
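As a rough sketch, the load-vs-CPU rule of thumb above can be checked from the shell (getconf and /proc/loadavg are standard on Linux; the "overloaded" threshold here is just that rule of thumb, not an official metric):

```shell
# Rough saturation check: compare the 1-minute load average
# against the number of online CPUs.
cpus=$(getconf _NPROCESSORS_ONLN)
read -r load1 _ < /proc/loadavg
state=$(awk -v l="$load1" -v c="$cpus" 'BEGIN { if (l > c) print "overloaded"; else print "ok" }')
echo "load $load1 on $cpus CPU(s): $state"
```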

#5: ps - Displays The Processes

The ps command reports a snapshot of the current processes. To select all processes, use the -A or -e option:
# ps -A
Sample Outputs:

  PID TTY          TIME CMD
    1 ?        00:00:02 init
    2 ?        00:00:02 migration/0
    3 ?        00:00:01 ksoftirqd/0
    4 ?        00:00:00 watchdog/0
    5 ?        00:00:00 migration/1
    6 ?        00:00:15 ksoftirqd/1
....
.....
 4881 ?        00:53:28 java
 4885 tty1     00:00:00 mingetty
 4886 tty2     00:00:00 mingetty
 4887 tty3     00:00:00 mingetty
 4888 tty4     00:00:00 mingetty
 4891 tty5     00:00:00 mingetty
 4892 tty6     00:00:00 mingetty
 4893 ttyS1    00:00:00 agetty
12853 ?        00:00:00 cifsoplockd
12854 ?        00:00:00 cifsdnotifyd
14231 ?        00:10:34 lighttpd
14232 ?        00:00:00 php-cgi
54981 pts/0    00:00:00 vim
55465 ?        00:00:00 php-cgi
55546 ?        00:00:00 bind9-snmp-stat
55704 pts/1    00:00:00 ps

ps is similar to top, but instead of a continuously updating display it shows a one-time snapshot, and it can provide more information about each process.

Show Long Format Output

# ps -Al
To turn on extra full mode (it will show command line arguments passed to process):
# ps -AlF

To See Threads (LWP and NLWP)

# ps -AlFH

To See Threads After Processes

# ps -AlLm

Print All Processes On The Server

# ps ax
# ps axu

Print A Process Tree

# ps -ejH
# ps axjf
# pstree

Print Security Information

# ps -eo euser,ruser,suser,fuser,f,comm,label
# ps axZ
# ps -eM

See Every Process Running As User Vivek

# ps -U vivek -u vivek u

Set Output In a User-Defined Format

# ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
# ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
# ps -eopid,tt,user,fname,tmout,f,wchan

Display Only The Process IDs of Lighttpd

# ps -C lighttpd -o pid=
OR
# pgrep lighttpd
OR
# pgrep -u vivek php-cgi

Display The Name of PID 55977

# ps -p 55977 -o comm=

Find Out The Top 10 Memory Consuming Processes

# ps aux | sort -nr -k 4 | head -10

Find Out The Top 10 CPU Consuming Processes

# ps aux | sort -nr -k 3 | head -10
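Alternatively, procps ps can do the sorting itself via --sort, which avoids piping through an external sort (a sketch, not the article's original recipe):

```shell
# Top 10 processes by memory, then by CPU, using ps's built-in sorting.
ps axo pid,pmem,pcpu,comm --sort=-pmem | head -11   # header + 10 rows
ps axo pid,pmem,pcpu,comm --sort=-pcpu | head -11
```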

#6: free - Memory Usage

The command free displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel.
# free
Sample Output:

             total       used       free     shared    buffers     cached
Mem:      12302896    9739664    2563232          0     523124    5154740
-/+ buffers/cache:    4061800    8241096
Swap:      1052248          0    1052248

=> Related:

  1. Linux Find Out Virtual Memory PAGESIZE
  2. Linux Limit CPU Usage Per Process
  3. How much RAM does my Ubuntu / Fedora Linux desktop PC have?

#7: iostat - Average CPU Load, Disk Activity

The iostat command reports Central Processing Unit (CPU) statistics and input/output statistics for devices, partitions, and network filesystems (NFS).
# iostat
Sample Outputs:

Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in)         06/26/2009

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.50    0.09    0.51    0.03    0.00   95.86

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              22.04        31.88       512.03   16193351  260102868
sda1              0.00         0.00         0.00       2166        180
sda2             22.04        31.87       512.03   16189010  260102688
sda3              0.00         0.00         0.00       1615          0

=> Related: Linux Track NFS Directory / Disk I/O Stats

#8: sar - Collect and Report System Activity

The sar command is used to collect, report, and save system activity information. To see network counters, enter:
# sar -n DEV | more
To display the network counters from the 24th:
# sar -n DEV -f /var/log/sa/sa24 | more
You can also display real time usage using sar:
# sar 4 5
Sample Outputs:

Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in)         06/26/2009

06:45:12 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
06:45:16 PM       all      2.00      0.00      0.22      0.00      0.00     97.78
06:45:20 PM       all      2.07      0.00      0.38      0.03      0.00     97.52
06:45:24 PM       all      0.94      0.00      0.28      0.00      0.00     98.78
06:45:28 PM       all      1.56      0.00      0.22      0.00      0.00     98.22
06:45:32 PM       all      3.53      0.00      0.25      0.03      0.00     96.19
Average:          all      2.02      0.00      0.27      0.01      0.00     97.70

=> Related: How to collect Linux system utilization data into a file

#9: mpstat - Multiprocessor Usage

The mpstat command displays activities for each available processor, processor 0 being the first one. Run mpstat -P ALL to display average CPU utilization per processor:
# mpstat -P ALL
Sample Output:

Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in)         06/26/2009

06:48:11 PM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
06:48:11 PM  all    3.50    0.09    0.34    0.03    0.01    0.17    0.00   95.86   1218.04
06:48:11 PM    0    3.44    0.08    0.31    0.02    0.00    0.12    0.00   96.04   1000.31
06:48:11 PM    1    3.10    0.08    0.32    0.09    0.02    0.11    0.00   96.28     34.93
06:48:11 PM    2    4.16    0.11    0.36    0.02    0.00    0.11    0.00   95.25      0.00
06:48:11 PM    3    3.77    0.11    0.38    0.03    0.01    0.24    0.00   95.46     44.80
06:48:11 PM    4    2.96    0.07    0.29    0.04    0.02    0.10    0.00   96.52     25.91
06:48:11 PM    5    3.26    0.08    0.28    0.03    0.01    0.10    0.00   96.23     14.98
06:48:11 PM    6    4.00    0.10    0.34    0.01    0.00    0.13    0.00   95.42      3.75
06:48:11 PM    7    3.30    0.11    0.39    0.03    0.01    0.46    0.00   95.69     76.89

=> Related: Linux display each SMP CPU processor's utilization individually

#10: pmap - Process Memory Usage

The pmap command reports the memory map of a process. Use it to find the causes of memory bottlenecks.
# pmap -d PID
To display process memory information for pid # 47394, enter:
# pmap -d 47394
Sample Outputs:

47394:   /usr/bin/php-cgi
Address           Kbytes Mode  Offset           Device    Mapping
0000000000400000    2584 r-x-- 0000000000000000 008:00002 php-cgi
0000000000886000     140 rw--- 0000000000286000 008:00002 php-cgi
00000000008a9000      52 rw--- 00000000008a9000 000:00000   [ anon ]
0000000000aa8000      76 rw--- 00000000002a8000 008:00002 php-cgi
000000000f678000    1980 rw--- 000000000f678000 000:00000   [ anon ]
000000314a600000     112 r-x-- 0000000000000000 008:00002 ld-2.5.so
000000314a81b000       4 r---- 000000000001b000 008:00002 ld-2.5.so
000000314a81c000       4 rw--- 000000000001c000 008:00002 ld-2.5.so
000000314aa00000    1328 r-x-- 0000000000000000 008:00002 libc-2.5.so
000000314ab4c000    2048 ----- 000000000014c000 008:00002 libc-2.5.so
.....
......
..
00002af8d48fd000       4 rw--- 0000000000006000 008:00002 xsl.so
00002af8d490c000      40 r-x-- 0000000000000000 008:00002 libnss_files-2.5.so
00002af8d4916000    2044 ----- 000000000000a000 008:00002 libnss_files-2.5.so
00002af8d4b15000       4 r---- 0000000000009000 008:00002 libnss_files-2.5.so
00002af8d4b16000       4 rw--- 000000000000a000 008:00002 libnss_files-2.5.so
00002af8d4b17000  768000 rw-s- 0000000000000000 000:00009 zero (deleted)
00007fffc95fe000      84 rw--- 00007ffffffea000 000:00000   [ stack ]
ffffffffff600000    8192 ----- 0000000000000000 000:00000   [ anon ]
mapped: 933712K    writeable/private: 4304K    shared: 768000K

The last line is very important:

  • mapped: 933712K is the total amount of memory mapped to files
  • writeable/private: 4304K is the amount of private address space
  • shared: 768000K is the amount of address space this process is sharing with others
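If you only care about those totals, the summary line can be extracted directly; a small sketch using the current shell's PID (assumes the procps pmap is installed):

```shell
# Print only pmap's summary line for the current shell process.
summary=$(pmap -d $$ | tail -n 1)
echo "$summary"
```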

=> Related: Linux find the memory used by a program / process using pmap command

#11 and #12: netstat and ss - Network Statistics

The netstat command displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. The ss command is used to dump socket statistics and can show information similar to netstat.

#13: iptraf - Real-time Network Statistics

The iptraf command is an interactive, colorful, ncurses-based IP LAN monitor that generates various network statistics, including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and others. It can provide the following info in an easy-to-read format:

  • Network traffic statistics by TCP connection
  • IP traffic statistics by network interface
  • Network traffic statistics by protocol
  • Network traffic statistics by TCP/UDP port and by packet size
  • Network traffic statistics by Layer2 address
Fig.02: General interface statistics: IP traffic statistics by network interface


Fig.03 Network traffic statistics by TCP connection


#14: tcpdump - Detailed Network Traffic Analysis

tcpdump is a simple command that dumps traffic on a network. However, you need a good understanding of TCP/IP to utilize this tool. For example, to display traffic info about DNS, enter:
# tcpdump -i eth1 'udp port 53'
To display all IPv4 HTTP packets to and from port 80, i.e. print only packets that contain data, not, for example, SYN and FIN packets and ACK-only packets, enter:
# tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
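The arithmetic in that filter is easier to see with concrete numbers. Below is a worked example with hypothetical header values (a 60-byte datagram with a 20-byte IP header and a 32-byte TCP header):

```shell
# The filter computes: total IP length - IP header - TCP header != 0
ip_total=60   # ip[2:2]: total datagram length in bytes
ip_ihl=5      # ip[0] & 0xf: IP header length in 32-bit words
tcp_hdr=32    # (tcp[12] & 0xf0) >> 2: TCP header length in bytes
payload=$(( ip_total - (ip_ihl << 2) - tcp_hdr ))
echo "TCP payload: $payload bytes"   # non-zero, so the packet would match
```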
To display all FTP sessions to 202.54.1.5, enter:
# tcpdump -i eth1 'dst 202.54.1.5 and (port 21 or port 20)'
To display all HTTP sessions to 192.168.1.5:
# tcpdump -ni eth0 'dst 192.168.1.5 and tcp and port http'
To capture packets to a file for later detailed analysis with Wireshark, enter:
# tcpdump -n -i eth1 -s 0 -w output.txt src or dst port 80

#15: strace - System Calls

strace traces system calls and signals. This is useful for debugging web server and other server problems. Use it to trace a process and see what it is doing.

#16: /proc File System - Various Kernel Statistics

The /proc file system provides detailed information about various hardware devices and other Linux kernel internals. See the Linux kernel /proc documentation for further details. Common /proc examples:
# cat /proc/cpuinfo
# cat /proc/meminfo
# cat /proc/zoneinfo
# cat /proc/mounts
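As a small sketch, a few commonly wanted numbers can be pulled straight out of /proc with standard tools (the field names used here are the standard ones found in /proc/meminfo, /proc/cpuinfo, and /proc/loadavg):

```shell
# Summarize CPU count, total RAM, and 1-minute load from /proc.
mem_total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
cpus=$(grep -c '^processor' /proc/cpuinfo)
read -r load1 _ < /proc/loadavg
echo "CPUs: $cpus  RAM: $((mem_total_kb / 1024)) MB  1-min load: $load1"
```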

#17: Nagios - Server And Network Monitoring

Nagios is a popular open source computer system and network monitoring application. You can easily monitor all your hosts, network equipment, and services. It can send alerts when things go wrong and again when they get better. FAN ("Fully Automated Nagios") aims to provide a Nagios installation that includes most of the tools provided by the Nagios community. FAN provides a CD-ROM image in the standard ISO format, making it easy to install a Nagios server, and bundles a wide range of tools to improve the user experience around Nagios.

#18: Cacti - Web-based Monitoring Tool

Cacti is a complete network graphing solution designed to harness the power of RRDTool's data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy to use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices. It can provide data about network, CPU, memory, logged in users, Apache, DNS servers and much more. See how to install and configure Cacti network graphing tool under CentOS / RHEL.

#19: KDE System Guard - Real-time Systems Reporting and Graphing

KSysguard is a network enabled task and system monitor application for KDE desktop. This tool can be run over ssh session. It provides lots of features such as a client/server architecture that enables monitoring of local and remote hosts. The graphical front end uses so-called sensors to retrieve the information it displays. A sensor can return simple values or more complex information like tables. For each type of information, one or more displays are provided. Displays are organized in worksheets that can be saved and loaded independently from each other. So, KSysguard is not only a simple task manager but also a very powerful tool to control large server farms.

Fig.05 KDE System Guard

Fig.05 KDE System Guard {Image credit: Wikipedia}

See the KSysguard handbook for detailed usage.

#20: Gnome System Monitor - Real-time Systems Reporting and Graphing

The System Monitor application enables you to display basic system information and monitor system processes, usage of system resources, and file systems. You can also use System Monitor to modify the behavior of your system. Although not as powerful as the KDE System Guard, it provides the basic information which may be useful for new users:

  • Displays various basic information about the computer's hardware and software
  • System information: Linux kernel version, GNOME version
  • Hardware: installed memory, processors and speeds
  • System status: currently available disk space
  • Processes, memory and swap space, network usage
  • File systems: lists all mounted filesystems along with basic information about each
Fig.06 The Gnome System Monitor application

Fig.06 The Gnome System Monitor application

Bonus: Additional Tools

A few more tools:

  • nmap - scan your server for open ports.
  • lsof - list open files, network connections and much more.
  • ntop (web-based tool) - ntop is the best tool to see network usage in a way similar to what the top command does for processes, i.e. it is network traffic monitoring software. You can see network status and the protocol-wise distribution of traffic for UDP, TCP, DNS, HTTP and other protocols.
  • Conky - Another good monitoring tool for the X Window System. It is highly configurable and is able to monitor many system variables including the status of the CPU, memory, swap space, disk storage, temperatures, processes, network interfaces, battery power, system messages, e-mail inboxes etc.
  • GKrellM - It can be used to monitor the status of CPUs, main memory, hard disks, network interfaces, local and remote mailboxes, and many other things.
  • vnstat - vnStat is a console-based network traffic monitor. It keeps a log of hourly, daily and monthly network traffic for the selected interface(s).
  • htop - htop is an enhanced version of top, the interactive process viewer, which can display the list of processes in a tree form.
  • mtr - mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool.

Did I miss something? Please add your favorite system monitoring tool in the comments.

Wednesday, November 21, 2012

Building your own Sublime out of free components with vim


I recently wrote about Sublime Text, and what a nice text editor it is. It really is a very nice editor and I don't want to rain on its parade. I bought a license even though I never use proprietary software for work; I haven't used any for over 10 years now. That's how good I think Sublime is.

But if you're comfortable with an editor like vim, you can make vim feel almost like Sublime, using only free and open source software (FOSS). vim (and emacs) have had many of the features that Sublime has, in some cases for decades. Here's a very small and simple guide for making vim look and behave a little like Sublime.

First, install the required components into your .vim directory:

  1. vim pathogen to allow you to easily add and remove vim plugins. Somewhat like Sublime's package installer, only with many more packages available since vim has a 30+ year history.
  2. NERDtree gives you a separate directory tree to browse and open files from, like the left-hand side tree in Sublime.
  3. If you're a Rubyist, install vim-ruby: git clone git://github.com/vim-ruby/vim-ruby.git ~/.vim/bundle/vim-ruby
  4. A terribly nice 256-color colorscheme (works in the terminal and in the GUI version of vim): xoria256. Make sure your terminal supports 256 colors; set your TERM variable to something like xterm-256color. If you use a terminal multiplexer like screen or tmux, set it to screen-256color to make sure your background colors work properly.
  5. To replicate the Control-P/Command-P (Go To File) behavior found in Sublime, you can use vim's Command-T plugin. Thanks for the hint, Stefan! Another similar plugin, written entirely in vimscript so it doesn't need Ruby, is ctrlp.vim.
  6. Set up your vimrc to load most of this stuff. See below for mine.

Example .vimrc:

call pathogen#infect()
syntax on
filetype plugin indent on
set number
colorscheme xoria256

Now you can start vim and execute :NERDTree to get a nice tree on the left-hand side, then open a file and perhaps split your window (Ctrl-w n) and load each of the files in separate split frames. Navigate back and forth between frames using Ctrl-w Ctrl-w (or Ctrl-w followed by the arrow key of the direction you want to move in). You can also split each of the frames into tabs, just like in Sublime.

And to finish off, one of the features I use most frequently in Sublime is the "find inside files in a folder" search (Ctrl-Shift-F). In vim, you can accomplish the same using e.g. vimgrep. First, grep for something in the files you want: :vimgrep meta_datum **/*.rb. Then bring up the quickfix list to see your results: :cw. This way, you can navigate almost in the same way as in Sublime.

Two screenshots of all this combined below. Now go on and customize vim to fit your needs exactly!

This entry was posted in Technology by Ramón Cahenzli.

2 THOUGHTS ON "BUILDING YOUR OWN SUBLIME OUT OF FREE COMPONENTS WITH VIM"

  1. Stefan on July 19, 2012 at 09:10 said:

    Sublime *is* the second best editor ever! The feature I like most about Sublime is the quickly-find-file-in-project-or-folder command "ctrl-p". There's the Command-T plugin for vim that does about the same: https://github.com/wincent/Command-T

  2. Joon Ki on August 2, 2012 at 15:01 said:

    and xoria256 is the second best colorscheme :) , i prefer solarized: http://ethanschoonover.com/solarized
    https://github.com/altercation/solarized
    beautiful theme for terminal and graphical vim.

    An almost complete bunch of Vim-Plugins preconfigured:
    https://github.com/akitaonrails/vimfiles

    Works like a charm.

Tuesday, September 25, 2012

Essential Sublime Text 2 Plugins and Extensions


Sublime Text 2 is a relatively new code editor that I've been trying out for a while now. While it's still in public beta, it already offers a great mix of features and performance that has convinced me to switch from my trusted Komodo.

While I really do love the features available out of the box, as with most things in life, there is always room for more. With Sublime Text 2 being as extensible as it is, a big ecosystem has sprouted around it, catering to most of your web development needs, be they actually useful or catering to your whimsy. To that effect, today I'd like to share some of the plugins and extensions that I've found quite useful. While not all of them may appeal to you, I'm sure you'll a find a gem or two that will absolutely ease your workflow!


Zen Coding


Zen Coding is an editor plugin for high-speed HTML coding and editing. The core of this plugin is a powerful abbreviation engine which allows you to expand expressions—similar to CSS selectors—into HTML code.


JQuery Package for Sublime Text

And where would we all be without jQuery? This is a Sublime Text bundle to help with jQuery functions.


Sublime Prefixr


A plugin that runs CSS through the Prefixr API, written by our very own Jeffrey Way, for Sublime Text 2.


JS Format

JsFormat is a JavaScript formatting plugin for Sublime Text 2. It uses the command-line/Python-module JavaScript formatter from JS Beautifier to format the selected text, or the entire file if there is no selection.


SublimeLinter


SublimeLinter is a plugin that supports "lint" programs (known as "linters"). SublimeLinter highlights lines of code the linter deems to contain (potential) errors. It also supports highlighting special annotations so that they can be quickly located.


Placeholders

I always find inserting placeholder, or filler, content to be a quite tedious affair. With this plugin, you can insert placeholder content and HTML in a cinch!


Sublime Alignment


I'm quite a stickler for properly formatted code. One thing to get right is lining up all those darn variable assignments so they look organized and neat. With this plugin, all it takes is a key press: a simple key binding lets you align multi-line and multiple selections.


Clipboard History

Tired of having to swap out your clipboard's contents during a marathon hackathon? Keep a history of your clipboard items with this plugin and paste away as needed.


SublimeREPL


SublimeREPL lets you run your favorite interpreter inside a Sublime buffer. Languages supported include Python and Ruby.


DetectSyntax

DetectSyntax is a plugin for Sublime Text 2 that allows you to detect the syntax of files that might not otherwise be detected properly. This is especially helpful when you run into custom file formats: files used in templating are an excellent example.


Nettuts Fetch


This plugin automatically pulls in the latest copy of a file, simply by typing a keyboard shortcut. It'll perform a curl request to your specified URL and allow you to rest assured that, for all new projects, you're using the latest copy of a particular asset.


JsMinifier

It's a good practice to always minify your files when deploying to a production server, and this plugin will swiftly automate the process by minifying your JavaScript using the Google Closure compiler.


Sublime CodeIntel


SublimeCodeIntel is a code intelligence plugin ported from the Open Komodo Editor to Sublime Text 2. It shows autocomplete information for the available modules in real time and displays information about the current function in the status bar. Nifty!


Tag

This is a great plugin when you're working with a lot of markup. Tag is a collection of packages about, predictably, tags, mixed together in an effort to provide a single package with utilities to work with tags. Close a tag on a slash and tag indenting? Sign me up!


Bracket Highlighter


This plugin collection includes plugins to fold your code according to brackets, cycle through selecting tags, and much more.


Case Conversion

Have a messy co-worker who completely ignores naming conventions? This plugin should save you a good chunk of time. Case Conversion converts the current word between three of the most commonly used conventions.


Stackoverflow Search


StackOverflow is an absolute life saver — I can't count the sheer number of times it has saved my skin. This plugin lets you do a search on SO directly from your editor.


Sublime Guard

Remember Jeffrey using a gem called Guard in his super useful Rails tutorial? Well, this plugin provides a seamless interface for controlling Guard and viewing Guard output within Sublime Text 2.


Git


A nifty little plugin that integrates Git and Sublime Text and implements most of the commands that you'd use in real life. Diff viewing inside ST2 is a great time saver!


Sublime Change Quotes

This is one for the OCD among us. This plugin converts single to double or double to single quotes whilst attempting to preserve correct escaping.


Hex to HSL


Tired of constantly having to manually convert your colors' hexcodes to HSL? This plugin will automatically do it for you with the press of a button. Well, ok, three buttons. [Shift+Ctrl+U]


Source: http://net.tutsplus.com/tutorials/tools-and-tips/essential-sublime-text-2-plugins-and-extensions/

Ignoring files in git repositories


According to the man page, there are three ways to exclude files from being tracked by git.

Shared list of files to ignore

The most well-known way of preventing files from being tracked by git is to list them in a .gitignore file. (This is analogous to CVS' .cvsignore files.)

Here's an example:
*.generated.html
/config.php
The above ignore list will prevent automatically generated HTML files from being committed by mistake to the repository. Because this is useful to all developers on the project, .gitignore is a good place for this.

The next line prevents the local configuration file from being tracked by git, something else that all developers will want to have.

One thing to note here is the use of a leading slash character with config.php. This is to specifically match the config file in the same directory as the .gitignore file (in this case, the root directory of the repository) but no other. Without this slash, the following files would also be ignored by git:
/app/config.php
/plugins/address/config.php
/module/config.php
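You can verify this behaviour yourself with git check-ignore in a throwaway repository (a sketch; the paths are made up for the demo, and git 1.8.2 or later is assumed for check-ignore):

```shell
# Demonstrate that the pattern /config.php only matches at the repo root.
repo=$(mktemp -d)
cd "$repo"
git init -q
printf '/config.php\n' > .gitignore
mkdir -p app
touch config.php app/config.php
git check-ignore -q config.php && root_ignored=yes
git check-ignore -q app/config.php || sub_ignored=no
echo "root config.php ignored: ${root_ignored:-no}"
echo "app/config.php ignored:  ${sub_ignored:-yes}"
```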

Local list (specific to one project)

For those custom files that you don't want version controlled but that others probably don't have or don't want to automatically ignore, git provides a second facility: .git/info/exclude

It works the same way as .gitignore but be aware that this list is only stored locally and only applies to the repository in which it lives.

(I can't think of a good example for when you'd want to use this one because I don't really use it. Feel free to leave a comment if you do use it though, I'm curious to know what others do with it.)

Local list (common to all projects)

Should you wish to automatically ignore file patterns in all of your projects, you will need to use the third gitignore method: core.excludesfile

Put this line in your ~/.gitconfig:
[core]
excludesfile = /home/username/.gitexcludes
(you need to put the absolute path to your home directory, ~/ will not work here unless you use git 1.6.6 or later)

and then put the patterns to ignore in ~/.gitexcludes. For example, this will ignore the automatic backups made by emacs when you save a file:
*~
This is the ideal place to put anything that is generated by your development tools and that doesn't need to appear in your project repositories.

Source: http://feeding.cloud.geek.nz/2009/12/ignoring-files-in-git-repositories.html

git ignoring files


committed 19 Jan 2009

We don't need Git to version everything in our projects, be it compiled source, files with passwords, or temporary files that editors love to create. Usually keeping stuff out of your VCS' hands is a task that is hard to manage and annoying to set up. Not with Git! Using the .gitignore file along with some other options, we're going to learn how to set up per-project and per-user ignores.

The easiest and simplest way is to create a .gitignore file in your project's root directory. The files you choose to ignore here take effect for all directories in your project, unless they include their own .gitignore file. This is nice since you have one place to configure ignores, unlike SVN's svn:ignore which must be set on every folder. Also, the file itself can be versioned, which is definitely good.

Here's a basic .gitignore:

$ cat .gitignore

# Can ignore specific files
.DS_Store

# Use wildcards as well
*~
*.swp

# Can also ignore all directories and files in a directory.
tmp/**/*

Of course, this could get a lot more complex. You can also add exceptions to ignore rules by starting the line with !. See an example of this at the GitHub guide on ignores.
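As a sketch of how a negation rule plays out (the build/ directory and filenames here are invented for the example), newer versions of git ship a check-ignore command that reports which paths the rules match:

```shell
# Ignore everything in build/ except build/keep.txt.
cd "$(mktemp -d)"
git init -q .
printf 'build/*\n!build/keep.txt\n' > .gitignore
mkdir build
touch build/junk.o build/keep.txt
git check-ignore build/junk.o               # ignored: prints the path
git check-ignore build/keep.txt || echo 'not ignored'
```

Note the pattern is build/* rather than build/; a negation cannot re-include a file whose parent directory is itself excluded.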

Two things to keep in mind with ignoring files: First, if a file is already being tracked by Git, adding the file to .gitignore won't stop Git from tracking it. You'll need to do git rm --cached <file> to keep the file in your tree and then ignore it. Secondly, empty directories do not get tracked by Git. If you want them to be tracked, they need to have something in them. Usually doing a touch .gitignore is enough to keep the folder tracked.
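That first point can be sketched end to end (the filename config.local is invented for the example): untrack an already-committed file while leaving it in your working tree, then ignore it going forward.

```shell
cd "$(mktemp -d)"
git init -q .
git config user.email 'you@example.com' && git config user.name 'You'
echo 'password=hunter2' > config.local
git add . && git commit -qm 'oops, committed a local config'
echo 'config.local' >> .gitignore
git rm --cached config.local        # untrack it, but keep it on disk
git add .gitignore
git commit -qm 'stop tracking config.local'
ls config.local                     # still present in the working tree
```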

You can also open up $GIT_DIR/info/exclude ($GIT_DIR is usually your .git folder) and edit that file for project-only ignores. The problem with this is that those changes aren't checked in, so use this only if you have some personal files that don't need to be shared with others on the same project.

Your final option for ignoring files is adding a per-user ignore by setting the core.excludesfile option in your config file. You can set up a .gitignore file in your HOME directory that will affect all of your repositories by running this command:

git config --global core.excludesfile ~/.gitignore

Read up on the manpage if you'd like to learn more about how ignores work. As always, if you have other ignore-related tips let us know in the comments.


Source: http://gitready.com/beginner/2009/01/19/ignoring-files.html

Sunday, September 9, 2012

Creating a Shared Repository; Users Sharing The Repository



Commands discussed in this section:

  • git init --bare
  • git clone
  • git remote
  • git pull
  • git push

Scenario: Example Remote Repository

Let's set up our own little "remote" repository and then share it. (The repository will be "remote" to the users sharing it.)

In these examples, the other users sharing the repository will not be very remote since the repository will be on the same disk as the users' home directories. But the git workflow and commands are identical, whether the users and repositories are just a few millimeters away on the same disk, or on a remote network across the world.

Creating The Shared Repository

We'll have the repository created by the user gitadmin. The gitadmin's repository will be the repository where everybody on the project both publishes their work and retrieves the latest work done by others.

The scenario:

  • gitadmin will create a repository.
  • Other users, like Amy and Zack, will then get ("git clone") copies of gitadmin's remote repository.
  • Changes will be pulled and pushed to and from gitadmin's repository.

Create Shared Repositories "Bare"

If you are creating a git repository for only your own use on projects or days when you just don't feel like sharing, you type:

gitadmin$ git init project1
Initialized empty Git repository in /home/gitadmin/project1/.git/

However, if you are creating a git repository for sharing with git clone/pull/fetch/push, use the --bare option to git init:

gitadmin$ git init --bare project1.git
Initialized empty Git repository in /home/gitadmin/project1.git/

If you want to know why, see Shared Repositories Should Be Bare Repositories.
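To see the difference for yourself (directory names mirror the example above): an ordinary repository has a working tree with a .git/ subdirectory, while a bare repository keeps the git internals at its top level and has no working tree at all.

```shell
cd "$(mktemp -d)"
git init -q project1              # ordinary repo: working tree plus .git/
git init -q --bare project1.git   # bare repo: internals only, no working tree
ls project1                       # (empty working tree)
ls project1.git                   # HEAD, config, objects/, refs/, ...
```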

Bare Repositories End in ".git"

You might have noticed the --bare repository created above ended in .git. By convention, bare git repositories should end in .git. For example, project1.git or usplash.git, etc. The .git ending of a directory signals to others that the git repository is bare.

Amy is ready to add to the remote repository

In our example, since Amy's name begins with the first letter of the alphabet, she gets to work on the repository first.

Amy clones it:

amy$ git clone file:///home/gitadmin/project1.git
Initialized empty Git repository in /home/amy/project1/.git/
warning: You appear to have cloned an empty repository.

Git just told us the repository that Amy just cloned is empty.

We can now start creating files and publishing ("git push") them to the shared repository.

Amy wants to see if there are any branches in the repository she just retrieved/cloned:

amy$ cd project1
amy$ git branch

The empty output from the git branch command shows there are no branches in the new repository.

Amy creates her first file and commits the new file to the repository.

amy$ echo The beginnings of project1 > amy.file
amy$ git add .
amy$ git commit -m"Amy's initial commit"
[master (root-commit) 01d7520] Amy's initial commit
 1 files changed, 1 insertions(+), 0 deletions(-)
 create mode 100644 amy.file
amy$ git branch
* master

The cloned, bare repository didn't have any branches, not even the master branch. When Amy did the first git commit, the master branch was created in Amy's local repository.

Amy tries to publish her local repository to the remote repository:

amy$ git push
No refs in common and none specified; doing nothing.
Perhaps you should specify a branch such as 'master'.
fatal: The remote end hung up unexpectedly
error: failed to push some refs to 'file:///home/gitadmin/project1.git'

Oops, that didn't work. The above happens on brand new, completely empty, branchless repositories (immediately after doing the git init --bare …).

Amy's local repository created the master branch, but the shared repository that gitadmin created still does not have any branches.

Amy will take git's advice and tell git the name of the branch she wants pushed to which remote repository. She must specify both the remote repository name and branch name.

What are the branch and repository names? Amy has been distracted lately and forgot the name of the remote repository, so she'll use the git remote command to list the names of her remote repositories:

amy$ git remote
origin

She is shown there is only one remote repository named origin. The default remote repository when you git clone a repository is named origin, so the above output isn't surprising.

Similarly, Amy can find out the branch name in her local repository by using the git branch command:

amy$ git branch
* master

The branch name master isn't surprising either, since master is the default branch name for git.

Armed with the remote repository name (origin) and local branch name (master) Amy can now push (publish) the changes.

The git push syntax is:
git push [remote-repository-name] [branch-or-commit-name].
Amy will push the branch named master to the remote repository named origin:

amy$ git push origin master
Counting objects: 3, done.
Writing objects: 100% (3/3), 245 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
To file:///home/gitadmin/project1.git
 * [new branch]      master -> master

The last line above reports a new branch was created: the master branch (referred to in some places as the "source") on the local repository was mapped to the master branch (referred to in some places as the "destination") on the remote repository.

Amy will no longer need to type git push origin master, but will be able to type git push, since the master branch now exists on the remote repository named origin:

amy$ git push
Everything up-to-date

Zack wants to play too

Now it's Zack's turn to play with the repository. He clones it:

zack$ git clone file:///home/gitadmin/project1.git
Initialized empty Git repository in /home/zack/project1/.git/
remote: Counting objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (3/3), done.
zack$ ls
amy.file

Above, the file Amy added, amy.file, is copied from the shared repository to Zack's working directory.

Zack adds a file and pushes it up to the shared repository:

zack$ cd project1
zack$ echo I am zack > zack.file
zack$ git add .
zack$ git commit -m 'zack initial commit'
[master 05affb3] zack initial commit
 1 files changed, 1 insertions(+), 0 deletions(-)
 create mode 100644 zack.file
zack$ git push
Counting objects: 4, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 283 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
To file:///home/gitadmin/project1.git
   01d7520..05affb3  master -> master

Note that Zack didn't have to do the git push origin master to create the master branch on the remote repository, since Amy had already created the master branch on the remote repository.

Amy wants to get the latest

amy$ git pull
remote: Counting objects: 4, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From file:///home/gitadmin/project1
   01d7520..05affb3  master     -> origin/master
Updating 01d7520..05affb3
Fast-forward
 zack.file |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
 create mode 100644 zack.file
amy$ ls
amy.file  zack.file

Things are working pretty well: Amy and Zack are sharing nicely: They are contributing to ("git push") and receiving from ("git pull") the shared repository.

The above summarizes how to get moving with shared, remote repositories. But there's a lot more fun you can have with remote repositories.


Source: http://www.gitguys.com/topics/creating-a-shared-repository-users-sharing-the-repository/