Monday, March 17, 2008

Yum|yum command: Update / Install Packages

yum command: Update / Install Packages under Red Hat Enterprise Linux / CentOS Linux version 5.x

Task: Register my system with RHN

To register your system with RHN, type the following command and follow the on-screen instructions (CentOS users can skip to the next step):
# rhn_register

WARNING! These examples only work with RHEL / CentOS Linux version 5.x or above. For RHEL 4.x and older versions, use the up2date command.

Task: Display the list of available updates (including security fixes)

Type the following command at shell prompt:
# yum list updates

Task: Patch up system by applying all updates

To download and install all updates type the following command:
# yum update

Task: List all installed packages

List all installed packages, enter:
# rpm -qa
# yum list installed

Find out whether the httpd package is installed, enter:
# rpm -qa | grep httpd
# yum list installed httpd
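In a script, the cleanest test is rpm's exit status rather than grepping its output. A minimal sketch, reusing the httpd example from above:

```shell
#!/bin/sh
# Sketch: test whether a package is installed using rpm's exit status
# (rpm -q exits 0 when the package is present, nonzero otherwise).
pkg_installed() {
    rpm -q "$1" >/dev/null 2>&1
}

if pkg_installed httpd; then
    echo "httpd is installed"
else
    echo "httpd is not installed"
fi
```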

Task: Check for and update specified packages

# yum update {package-name-1}
To check for and update the httpd package, enter:
# yum update httpd

Task: Search for packages by name

To search for the httpd package and all packages matching perl*, enter:
# yum list {package-name}
# yum list {regex}
# yum list httpd
# yum list perl*

Sample output:

Loading "installonlyn" plugin
Loading "security" plugin
Setting up repositories
Reading repository metadata in from local files
Installed Packages
perl.i386                                4:5.8.8-10.el5_0.2     installed
perl-Archive-Tar.noarch                  1.30-1.fc6             installed
perl-BSD-Resource.i386                   1.28-1.fc6.1           installed
perl-Compress-Zlib.i386                  1.42-1.fc6             installed
perl-DBD-MySQL.i386                      3.0007-1.fc6           installed
perl-DBI.i386                            1.52-1.fc6             installed
perl-Digest-HMAC.noarch                  1.01-15                installed
perl-Digest-SHA1.i386                    2.11-1.2.1             installed
perl-HTML-Parser.i386                    3.55-1.fc6             installed
.....
.......
..
perl-libxml-perl.noarch                  0.08-1.2.1             base
perl-suidperl.i386                       4:5.8.8-10.el5_0.2     updates

Task: Install the specified packages [ RPM(s) ]

Install package called httpd:
# yum install {package-name-1} {package-name-2}
# yum install httpd

Task: Remove / Uninstall the specified packages [ RPM(s) ]

Remove package called httpd, enter:
# yum remove {package-name-1} {package-name-2}
# yum remove httpd

Task: Display the list of available packages

# yum list all

Task: Display the list of package groups

Type the following command:
# yum grouplist
Output:

Installed Groups:
   Engineering and Scientific
   MySQL Database
   Editors
   System Tools
   Text-based Internet
   Legacy Network Server
   DNS Name Server
   Dialup Networking Support
   FTP Server
   Network Servers
   Legacy Software Development
   Legacy Software Support
   Development Libraries
   Graphics
   Web Server
   Ruby
   Printing Support
   Mail Server
   Server Configuration Tools
   PostgreSQL Database
Available Groups:
   Office/Productivity
   Administration Tools
   Beagle
   Development Tools
   GNOME Software Development
   X Software Development
   Virtualization
   GNOME Desktop Environment
   Authoring and Publishing
   Mono
   Games and Entertainment
   XFCE-4.4
   Tomboy
   Java
   Java Development
   Emacs
   X Window System
   Windows File Server
   KDE Software Development
   KDE (K Desktop Environment)
   Horde
   Sound and Video
   FreeNX and NX
   News Server
   Yum Utilities
   Graphical Internet
Done

Task: Install all the default packages by group

Install all 'Development Tools' group packages, enter:
# yum groupinstall "Development Tools"

Task: Update all the default packages by group

Update all 'Development Tools' group packages, enter:
# yum groupupdate "Development Tools"

Task: Remove all packages in a group

Remove all 'Development Tools' group packages, enter:
# yum groupremove "Development Tools"

Task: Install a package for a particular architecture

If you are using a 64-bit RHEL version, it is possible to install 32-bit packages:
# yum install {package-name}.{architecture}
# yum install mysql.i386

Task: Display packages not installed via official RHN subscribed repos

Show all packages that are not available via subscribed channels or repositories, i.e., packages installed from other repos:
# yum list extras
Sample output:

Loading "installonlyn" plugin
Loading "security" plugin
Setting up repositories
Reading repository metadata in from local files
Extra Packages
DenyHosts.noarch                         2.6-python2.4          installed
VMwareTools.i386                         6532-44356             installed
john.i386                                1.7.0.2-3.el5.rf       installed
kernel.i686                              2.6.18-8.1.15.el5      installed
kernel-devel.i686                        2.6.18-8.1.15.el5      installed
lighttpd.i386                            1.4.18-1.el5.rf        installed
lighttpd-fastcgi.i386                    1.4.18-1.el5.rf        installed
psad.i386                                2.1-1                  installed
rssh.i386                                2.3.2-1.2.el5.rf       installed

Task: Display what package provides the file

You can easily find out which RPM package provides a given file. For example, to find out what provides the /etc/passwd file:
# yum whatprovides /etc/passwd
Sample output:

Loading "installonlyn" plugin
Loading "security" plugin
Setting up repositories
Reading repository metadata in from local files
setup.noarch                             2.5.58-1.el5           base
Matched from:
/etc/passwd
setup.noarch                             2.5.58-1.el5           installed
Matched from:
/etc/passwd

You can use the same command to list packages that satisfy dependencies:
# yum whatprovides {dependency-1} {dependency-2}
Refer to the yum man page for more information:
# man yum
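For unattended patching, yum's documented exit codes can drive a cron job: `yum check-update` exits 100 when updates are available, 0 when there are none, and 1 on error. A minimal sketch:

```shell
#!/bin/sh
# Sketch: apply updates only when `yum check-update` reports some.
apply_updates() {
    yum -q check-update >/dev/null 2>&1
    case $? in
        100) echo "updates available; applying"
             yum -y update ;;
        0)   echo "system is up to date" ;;
        *)   echo "yum check-update failed" >&2
             return 1 ;;
    esac
}
```

A script like this could run nightly from cron on machines where unattended updates are acceptable.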

Friday, March 14, 2008

SOA|SOAP Web Services : awkward. Python and Web Services : painful

SOAP Web Services : awkward. Python and Web Services : painful. - Nicolas Lehuen's Weblog

SOAP Web Services : awkward. Python and Web Services : painful.

By Nico on Tuesday, May 30 2006, 21:47 - General - Permalink

So, I had managed to dodge it until now, but that's it: we're in 2006 and I need to use SOAP web services. Back in 2002 I was implementing and consuming REST+XML or XML-RPC web services all over the place. At the time, SOAP smelled like... well, let's say it definitely didn't smell like soap. Today, I'm not the slightest bit more convinced, but when you gotta do it, you gotta do it.

So yes, I used Java and the AXIS web services library to consume web services, namely the Google Adwords API and the Amazon Electronic Commerce Services. Of course, Java isn't a dynamic programming language, so you have to generate a bunch of code using the WSDL2Java compiler, but it's not that difficult. You then get a true Java API that smells like... well, it doesn't smell very soapy either.

I mean, I've already given a sample of the need to instantiate a factory to get a locator service that will build a service implementation that you then need to configure. It isn't pretty. But that's only the initialization part. Then, you use an API that has obviously been generated by a computer and doesn't feel user friendly at all. It's a bit like a DOM API from another dimension. Well, let me pat myself on the back for this cunning analogy, because it is in fact exactly that: the AXIS toolkit did its best to build wrappers around the structure of request and response documents. XML documents, mind you, and I won't even add insult to injury by reminding my wide audience that those structures are specified in the Rube Goldberg XML Schema format.

Well, give me embedded XML literals like in E4X or C# 3.0, and native XPath support like in... well, LINQ (also in C# 3.0; there's a thing going on here), and you've got a proper toolkit to consume web services. But here, AXIS generates a bunch of wrappers that are quite awkward to use as soon as the document structure is a tad complicated. Pretty soon you wish you could stop using those wrappers and access the raw XML nodes (preferably using the XOM API, but that's another story). I'm sure there's an option somewhere that allows you to do that, but I could not find it, and given what I saw, I expect the result would be quite messy.

Anyway, while struggling with all those bells and whistles, I told myself "Hey, all this mess is caused by the static typing of Java, why not try to do it in Python?". Why not, indeed? Well, because if using SOAP web services in Java is awkward, in Python it's actually painful, because It Just Won't Work™.

Oh, I know, the documentation is wonderful, and dynamic typing allows for wonderful proxy thingies that all work by magic... CORBA is just sweet in Python (see omniORB), because you don't have to worry about code generators and compilers and so on. You give your ORB an IDL file and bam, it works. Or not. But that's another story.

Well, it's not, actually, because SOAP web services are really a stripped-down, crappy version of CORBA, with WSDL being a tricky, complicated version of IDL. So you'd expect the experience to be similar, and it is: you give your SOAP library a WSDL and bam, it works.

Except it doesn't. Because in the Python world, XML libraries in general and web services libraries in particular are either half implemented or unmaintained since the heady days of 2000-2002. One of the brand new things in Python 2.5 is the inclusion of ElementTree, a Python-friendly replacement for the pile of non-standard DOM APIs that we have right now. Isn't it about time?

For web services, the situation is much worse, and the sad truth is that you can use ZSI or SOAPy to consume web services, but there is a very high chance that they won't be compatible with obscure web services like the ones offered by Google, Amazon or Yahoo. Thank god the two latter also provide REST APIs... But right now it's the Google APIs that interest me. And Google said "thou shalt use SOAP, SOAP thou shalt use". And the Python SOAP APIs don't understand. We might as well use CORBA, for that kind of "interesting" incompatibility.

raimondas says "Python Web Services - Not Quite Painless", but I feel that for the sake of precision, we should simply say "Python Web Services - Actually Quite Painful".

The good thing is, like for all painful things, it's a relief when it stops. I really didn't expect to enjoy using Java and AXIS for this. Oh, well...


Wednesday, March 12, 2008

Network|The Difference Between ADSL Bridged Mode and Routed Mode

The Difference Between ADSL Bridged Mode and Routed Mode - Cisco and Huawei Technology Network (Vlan9.com)

The Difference Between ADSL Bridged Mode and Routed Mode


Source: Vlan9.com, 2007-10-02

Users of ADSL broadband access are often troubled by the choice between ADSL "bridged mode" and "routed mode".

This article introduces the ADSL access modes commonly used in China, focusing on the difference between the "bridged mode (RFC 1483 Bridged)" and the "routed mode (PPPoE)" of an ADSL router (also known as an "ADSL modem with routing").

Bridged mode and routed mode

Early ADSL lines in China used bridging: the ADSL modem worked together with the PC, which was assigned a static IP address, so the PC was on the Internet as soon as it booted. But while the user was not online, the IP address sat idle, wasting increasingly scarce public IP addresses, and so PPPoE dial-up ADSL access appeared.

With PPPoE dial-up, the user dials in to the carrier's equipment at boot time and is assigned a dynamic public IP, which eases the shortage of public addresses. Today, almost all ADSL access in China uses PPPoE dial-up. With the arrival of PPPoE, the ADSL access device, the ADSL modem, gained a sibling product: the ADSL router.

Such a device keeps the basic bridging function of an ADSL modem, so some products are also called ADSL bridge/routers, commonly known as "ADSL modems with routing". An ADSL router has a built-in PPPoE dialer and can provide DHCP service, RIP-1 routing, and similar features, so a small subset of router functionality has been grafted onto it.

That said, PPPoE dialing does not rule out bridging. A common setup of this kind is an ADSL modem plus PPPoE dialer software (such as EnterNet 300). A few carriers still push plain ADSL modems with no routing function, effectively discouraging users from sharing one line among several machines. However, competition among ADSL equipment vendors is fierce, so plain ADSL modems have largely gone out of production in favour of ADSL routers. This is why most "ADSL modems" sold today are said to "have routing" - that is, most ADSL access devices are in fact ADSL routers.

Because network designs differ, an ADSL router can work in either bridged or routed mode. A home or SOHO user with a few client machines can use PPPoE Routed mode directly, letting the ADSL router dial PPPoE and do the routing. Alternatively, RFC 1483 Bridged mode can be used, with an attached PC running PPPoE dialer software, or with a broadband router whose built-in PPPoE client does the dialing.

In multi-user environments with many clients - Internet cafes, companies, residential communities - the usual design is an ADSL router plus a broadband router. The ADSL router is normally put in bridged mode, and the broadband router does the dialing and the routing, because the routing capacity of an ADSL router is limited: handling routing requests from a large number of clients degrades its performance or even hangs the device. So bridged mode and routed mode are really properties of the ADSL router.

What is bridged mode

The formal, professional name of the ADSL router's bridged mode is RFC 1483 bridging. The RFC 1483 standard was defined to encapsulate and carry multiprotocol network-layer packets over ATM networks, and it is now widely used in ATM technology as the standard encapsulation for multiprotocol packets on ATM networks.

RFC 1483 emulates Ethernet bridging: at the data link layer it applies LLC/SNAP encapsulation to network-layer packets. After the ADSL modem wraps Ethernet frames in RFC 1483 ATM encapsulation, packets are transported transparently over the PVC (permanent virtual circuit) between the subscriber side and the carrier side. RFC 1483 bridged access is the most basic form of ADSL broadband access and the foundation of the other access modes; ADSL routers generally ship in bridged mode by default.

The factory default of an ADSL router is bridge mode for a single PC, i.e., its working mode is set to "BRIDGE ENABLE". In pure bridged mode the ADSL router is just an ordinary bridge with fairly simple functions. A proxy server or gateway device is usually needed to aggregate the LAN's traffic before connecting to the outside network, and the PPPoE dialer software runs on that proxy or gateway. In bridged mode the carrier may assign a static IP, which then has to be configured on the PC, or the address may be obtained automatically in combination with the dialer software.

What is routed mode

The ADSL router's routed mode generally means operating with "ROUTER ENABLE" set; the device then provides a small set of routing features such as PPPoE dialing, NAT and RIP-1.

PPPoE stands for Point-to-Point Protocol over Ethernet. It builds on two widely accepted standards: Ethernet LANs and the PPP point-to-point dial-up protocol. In the ADSL router, PPP packets from the terminal are given LLC/SNAP encapsulation using the RFC 1483 bridged framing, and a connection is established over the PVC linking the two ends, between the ADSL modem and the carrier's broadband access server, enabling dynamic PPP access. Carriers do not need to spend heavily on large-scale upgrades, or on binding IP addresses to users, as leased-line access requires. This gives PPPoE an advantage over other protocols in broadband access services, and it has gradually become the preferred choice for broadband connections.

In routed mode, the ADSL router is a self-contained system: it dials PPPoE and performs NAT itself, acting as a standalone gateway. No dedicated machine has to stay powered on with connection sharing enabled to serve as everyone's gateway, and no broadband router is needed; connect the ADSL router directly to the LAN switch and the connection is shared. The benefits of enabling routing: (1) no PC needs to act as a server - any machine can go online as soon as it boots; (2) the single public IP is held by the ADSL router, so external attacks all land on the ADSL router, which to some degree protects the PCs sharing the connection.

Enabling the ADSL router's routed mode dispenses with the proxy server and dialer software, or with a broadband router. However, because of hardware limitations, the ADSL router's routing capability is only adequate for sharing among a handful of PCs - homes, dormitories and other very small networks. For companies with dozens or even hundreds of machines, ADSL routing cannot cope. In a corporate environment, running the ADSL device in routed mode can cause problems such as the ADSL link frequently dropping and reconnecting, or the ADSL modem hanging and needing a restart.

There is a large performance gap between ADSL routers and broadband routers when it comes to routing, and the cause lies in the hardware. The CPU in an ADSL router may be a low-end network processor, such as an ARM7 running at only 50 MHz, with very little SDRAM. Mainstream broadband routers today have CPUs running at over 100 MHz and 16 MB or more of SDRAM. There are also differences in software capacity: the session table of an ADSL router is much smaller than that of a broadband router.

At the higher end, broadband routers also add security mechanisms such as an SPI firewall, DoS protection and IP filtering, plus features like DHCP, DMZ, virtual servers and DDNS - none of which a plain ADSL modem has.

In a network of any real size, using an ADSL router for routing is asking too much of it; its performance and feature set are limited, and a "more professional" broadband router should be chosen instead. Outside of homes, SOHO setups and other very small networks, the right approach is this: let the ADSL router stay quietly in bridged mode and handle access, and let a broadband router handle routing, security mechanisms and any other special requirements.

Summary:

ADSL, the mainstay of broadband access in China today, supports many line encapsulations, from which two so-called working modes are derived: bridged mode and routed mode. Faced with these two modes, some users are at a loss as to which to choose. In fact, for the PPPoE virtual dial-up that is common in China, either mode works; it is the scale of the network that determines which mode works better.

In short: for homes and SOHO-scale micro networks, routed mode is recommended; for larger networks such as Internet cafes, schools, companies and residential communities, bridged mode is recommended, with a broadband router added to perform the PPPoE virtual dialing and routing.

Firefox|Top 10 Firefox Plugins for SEO - SEO Tutorials

Top 10 Firefox Plugins for SEO - SEO Tutorials

Top 10 Firefox Plugins for SEO

Firefox is a great browser for SEOs and web developers. Because it is open source, many developers have written plugins and add-ons for it that perform a huge variety of tasks.

Here is my list of the top 10 plugins I use regularly:

  1. Web Developer Toolbar - essential for web designers and developers
  2. ColorZilla - great tool for getting hex and RGB numbers for colours from graphics
  3. Google Toolbar - search Google web, images, news etc. Also gives link to cached page, and backward links
  4. SEO for Firefox - from SEOBook, gives instant backlink data from a number of sources
  5. IE Tab - embeds IE into Firefox, allowing for easy comparison of site designs between the two
  6. SEOpen - useful backlink and pages indexed check, as well as server headers, robots.txt and whois lookup
  7. Google Icon - simple plugin that adds favicons to Google results. It makes searching through the SERPs that little bit easier!
  8. SearchStatus - another useful tool with backlink, Archive.org and whois checks
  9. Professor X - displays a page's head information without viewing the source code
  10. Screengrab - very useful tool for capturing whole pages as an image

Other useful plugins:

  • HTML Validator - adds HTML validation to the browser
  • ListZilla - another simple but useful tool, this one outputs all your plugins as HTML, allowing you to keep a backup (or indeed write interesting posts about all the plugins you have!)
  • Live HTTP Headers - adds a "Headers" tab to the View Page Info menu
  • Live PageRank - adds a little green bar to the bottom of your browser
  • PDF Download - helps with PDF download management
  • Sage - simple RSS and ATOM reader
  • Server Spy - gives info about what type of server a site is hosted on
  • Show IP - shows IP address of current page
  • Lynx Viewer - shows what the current page would look like in Lynx, a text-mode browser

Sunday, March 9, 2008

WLS|Configuring the WebLogic NodeManager Service

Configuring the WebLogic NodeManager Service - David.Turing's blog - BlogJava

I have seen a number of articles on this, including Steve Roth's "Using the WebLogic Node Manager Service with WebLogic Portal", but they can be too complicated and leave some WebLogic users dizzy, so I am adding this short post to describe a simple way to configure the NodeManager service.

If you have not yet built your own keystores, things get a little harder, because here I used openssl and keytool. To start WebLogic remotely through NodeManager, you need to configure SSL: when the AdminServer wants to start a remote ManagedServer, it must shake hands with the remote NodeManager over SSL.

Suppose we plan a domain named nodemanagerdomain. In this domain:

WebLogic instance  -----  machine
AdminServer        -----  sourcesite
m1                 -----  sourcesite
m2                 -----  destsite

What the AdminServer ultimately wants to do is start the m1 and m2 instances, and all three instances must have SSL configured.
Configure the three instances using the Custom Identity and Custom Trust scheme and make sure they can handshake with one another. All of the configuration is done on the AdminServer, including:
1) On the AdminServer, create two servers, m1 and m2.
2) On the AdminServer, create two machines, sourcesite and destsite.
3) Add m1 to sourcesite and add m2 to destsite.
sourcesite is configured as follows:
Listen Address:   sourcesite
  The host name or IP address where Node Manager listens for connection requests. 
   Listen Port:  5555
  The port number (between 0 and 65534) where Node Manager listens for connection requests. 

destsite is configured as follows:
Listen Address:   destsite
  The host name or IP address where Node Manager listens for connection requests. 
   Listen Port:  5555
  The port number (between 0 and 65534) where Node Manager listens for connection requests. 

OK, now start both of these NodeManagers!

NodeManager runs as an operating-system listener process. Its main job is to obey commands sent by the AdminServer on the machine specified in
%BEA_HOME%\weblogic81\common\nodemanager\nodemanager.properties.
For example, NodeManager may start a ManagedServer instance on its own, with a command similar to:
startManagedWebLogic.cmd <server_name> http://adminserver:7001

So, for now, there are only two things you need to do on the remote machine.
1. Configure nodemanager.properties with the following content:
CustomTrustKeyStorePassPhrase=weblogic
KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=destsite.jks
CustomIdentityKeyStorePassPhrase=weblogic
PropertiesVersion=8.1
CustomIdentityAlias=destsite
CustomTrustKeyStoreFileName=cs.jks
CustomIdentityPrivateKeyPassPhrase=weblogic
ReverseDnsEnabled=true
CustomIdentityKeyStoreType=JKS

2. Configure nodemanager.hosts: add the name of the machine where the AdminServer runs. For example, if the current machine is destsite and it must take orders from the AdminServer on sourcesite, add:
sourcesite

Now just start the NodeManager service.
On Windows, if you have not created a NodeManager service yet, run
C:\bea814\weblogic81\server\bin\installNodeMgrSvc.cmd weblogic weblogic
which registers a NodeManager service in the Windows service list; then set it to start automatically.
Windows makes this easy, but much of the time my own habit is not to use a Windows service at all and instead start it from a DOS command window - easier to watch!
C:\bea814\weblogic81\server\bin>startNodeManager.cmd

On Unix, add "sh startNodeManager.sh destsite 5555" to a startup script, or simply run it directly.

Once the NodeManagers on both machines are configured and started, we can control them from the AdminServer (on sourcesite).
If the AdminServer cannot complete the SSL handshake with a NodeManager, it reports the following error:
[NodeManager:300037]The node manager at host destsite and port 5555 seems to be down. Start the node manager and rerun the command.
Suppose both NodeManager services have now been restarted on the two machines, and suppose the SSL handshake succeeds (a tricky assumption for anyone unfamiliar with SSL configuration).
If we then click Node Manager Status on the Monitor tab of machine destsite,
the following error appears:
[[NodeManager:300033]Could not execute command ping on the node manager. Reason: weblogic.nodemanager.NodeManagerException: [NodeManager is not configured to receive commands from host : /192.168.1.111. Please update the trusted hosts file : nodemanager.hosts of the node manager by adding the hostname or ip address of /192.168.1.111> ].]
This is because when the AdminServer connects to destsite, the process listening on port 5555 on destsite consults the current nodemanager.hosts file to check whether the AdminServer's host is declared there. If it is not, the error above is reported - untrusted machines are not allowed to run start/stop operations against the NodeManager service on destsite.
Add sourcesite to destsite's nodemanager.hosts file and the problem is solved.
This time, refreshing Node Manager Status shows:
This page allows you to view current status information for the Node Manager.

State : RUNNING
BEA.home : null
weblogic.nodemanager.javaHome : C:\bea814\jdk142_05
weblogic.nodemanager.listenAddress : *.*
weblogic.nodemanager.listenPort : 5555
CLASSPATH : .;C:\bea814\jdk142_05\lib\tools.jar;C:\bea814\WEBLOG~1\server\lib\weblogic_sp.jar;C:\bea814\WEBLOG~1\server\lib\weblogic.jar

This proves the SSL handshake succeeded: the AdminServer can control destsite's NodeManager and ask it to start the m2 instance.

What remains is to configure sourcesite.jks, destsite.jks and cs.jks.
The relationship between these three keystores is:
the adminserver instance on the sourcesite machine uses sourcesite.jks and cs.jks
the m1 instance on the sourcesite machine uses sourcesite.jks and cs.jks
the m2 instance on the destsite machine uses destsite.jks and cs.jks
In WebLogic's keystore configuration, sourcesite.jks and destsite.jks are configured as the Identity part, while cs.jks serves as the Trust part.
For the related SSL configuration steps, see http://www.blogjava.net/security/archive/2005/11/28/21593.html
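As a rough sketch of how such keystores can be produced with keytool alone: the commands below generate a self-signed identity keystore and import its certificate into the shared trust store. The alias, CN and the weblogic passwords are placeholders echoing the nodemanager.properties example; a production setup may instead use certificates signed via openssl, as mentioned above.

```shell
#!/bin/sh
# Sketch: build an identity keystore (destsite.jks) holding a self-signed
# key, then import its certificate into the shared trust store (cs.jks).
# Repeat with alias/CN "sourcesite" to produce sourcesite.jks.
keytool -genkey -alias destsite -keyalg RSA -validity 365 \
        -dname "CN=destsite" \
        -keystore destsite.jks -storepass weblogic -keypass weblogic

keytool -export -alias destsite -file destsite.cer \
        -keystore destsite.jks -storepass weblogic

keytool -import -alias destsite -file destsite.cer -noprompt \
        -keystore cs.jks -storepass weblogic
```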

Thursday, March 6, 2008

SSH|public key/private key

About SSH

What is Secure Shell (SSH)

See Wikipedia if you need a full rundown: Wikipedia SSH page (http://en.wikipedia.org/wiki/Secure_Shell). Essentially, it is a protocol for connecting to and executing commands on a remote system, using a secure encrypted tunnel.

What is SFTP

SFTP is a file transfer protocol that uses SSH to authenticate and encrypt its traffic. It is essentially a sub-service of the SSH server.

Why not use FTP, RSH, etc.?

Both FTP and RSH use no encryption and pass passwords over the network in plain text. This makes it possible for the passwords to be captured in a number of ways, which is obviously bad for users and for system security. Therefore, whenever possible, SSH/SFTP should be used for file transfers and remote connections - use of FTP or RSH in legacy applications requires Infosec approval.

What are SSH keys?

One of the ways SSH improves security is by identifying a user with a "key". The benefit of a key is that at no time does a password or even the key itself traverse the network; a challenge-response mechanism is used to validate that the incoming user is who they say they are.

This works because there is a "private" and a "public" key. The client proves who it is by exchanging values with the server using the public key. The client uses the private key to produce a response which proves that it holds the private key associated with the public key. When the server validates a correct response, it allows access.

By using keys, there is no need to provide passwords, which allows non-interactive or passwordless interactive logins.

SSH Versions

In UNIX, there are two main flavours of SSH in common usage: OpenSSH and Secure Shell (commercial SSH). The two are compatible, but each differs slightly in syntax, file formats and configuration. The simple way to tell which you have, once you are on a system, is to use the "ssh -V" command.

The sections below outline the basic usage of each from a user perspective, as well as how to work between the two versions when the need arises.

OpenSSH

Identification

OpenSSH is the more common version and "ssh -V" will have either OpenSSH or OpenSSL in the output:

$ ssh -V
Sun_SSH_1.1, SSH protocols 1.5/2.0, OpenSSL 0x0090704f

$ ssh -V
OpenSSH_3.6.1p2, SSH protocols 1.5/2.0, OpenSSL 0x0090701f

Configuration

OpenSSH stores its configuration files under the ".ssh" directory of the user's home directory.

By default, it will identify a user using the key files "~/.ssh/id_dsa" and "~/.ssh/id_rsa".

It will validate an incoming user by matching public keys stored in the "~/.ssh/authorized_keys" file.

Creating Keys

To create a key using OpenSSH, use the ssh-keygen command. The command below creates a key using DSA encryption of 1024-bit strength, with an empty passphrase (-N ""), in the file id_dsa:

$ cd ~/.ssh
$ ssh-keygen -t dsa -b 1024 -N "" -f id_dsa

You will see two files created: an id_dsa and an id_dsa.pub. The id_dsa will now be used by the ssh command when attempting to authenticate you to other servers.

If you have both RSA and DSA keys created, it will try them both.

Allowing Access

To allow a remote user to login to your account using SSH, you simply need to append their public key to your ~/.ssh/authorized_keys file. For example:

$ cat bobskey.pub >> ~/.ssh/authorized_keys

Be sure the public key is in the OpenSSH format, however. If it is in the SecureSSH format, use the ssh-keygen command to convert it:

$ ssh-keygen -i -f secsshkey.pub > opensshkey.pub

You can then append the converted key to the authorized_keys file.

Secure SSH

Identification

Secure SSH is a commercially produced SSH implementation. Its version output will not reference OpenSSL and generally names a vendor:

$ ssh -V
ssh2: SSH Secure Shell 2.4.0 on alphaev56-dec-osf4.0e

$ ssh -V
ssh: SSH Tectia Server 4.0.5 on powerpc-ibm-aix5.1.0.0

Configuration

Secure SSH stores its configuration under the ".ssh2" directory of a user's home directory.

By default, it identifies a user using the key files listed in the "~/.ssh2/identification" file.

It will validate an incoming user by matching public key files listed in the "~/.ssh2/authorization" file.

Creating Keys

To create a key using Secure SSH, again use the ssh-keygen command. The command below creates a key using DSA encryption of 1024-bit strength with no passphrase (-P):

$ ssh-keygen -t dsa -b 1024 -P
Generating 1024-bit dsa key pair
5 oOOo.oOo.oOo
Key generated.
1024-bit dsa, root@o9030004, Thu May 31 2007 13:12:37
Private key saved to //.ssh2/id_dsa_1024_a
Public key saved to //.ssh2/id_dsa_1024_a.pub

To use this key to authenticate to remote servers, append its filename to the identification file like so:

$ echo "Key id_dsa_1024_a" >> ~/.ssh2/identification

You can append multiple lines to this file and the SSH client will attempt them in order.

Allowing Access

To allow remote access, you need to copy the public key file (i.e. id_dsa_1024_a.pub) to the remote system and place it under the user's .ssh2 directory. You then need to list the key file in the ~/.ssh2/authorization file like so:

$ echo "Key id_dsa_1024_a.pub" >> ~/.ssh2/authorization

Generally it is helpful to encode the server and user that the key is from in the filename, for example "cdun1410-ipg_as.pub", so you know which file is which.

If you are copying the public key from an OpenSSH system, then you need to first convert it to the SECSSH format by using the ssh-keygen command on the OpenSSH system:

openssh$ ssh-keygen -e -f id_dsa.pub > securessh.pub

You can then copy the resulting key and place on the Secure SSH system.

Working between Secure and OpenSSH

Both versions are equally secure; it just happens that the commercial version is made by a company called SSH Communications Security. The two are also compatible and able to exchange keys, provided that, as above, you convert the key files for use on each system as needed.

The simple way to tell if you have a SecureSSH or OpenSSH key file is by viewing it.

A SecureSSH key file will have a "BEGIN SSH2" and "END SSH2" line surrounding the actual key text.

An OpenSSH private key will have "BEGIN <keytype> PRIVATE KEY" around the key, and public keys begin with either "ssh-dss" or "ssh-rsa" followed by the key text on a single line.
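That first-line check is easy to script. A minimal sketch for public key files, assuming the standard "---- BEGIN SSH2 PUBLIC KEY ----" banner for the SecureSSH format:

```shell
#!/bin/sh
# Sketch: classify a public key file by its first line, as described
# above: SSH2 keys carry a BEGIN SSH2 banner, while OpenSSH public keys
# start with the key type.
key_format() {
    IFS= read -r first < "$1"
    case "$first" in
        *"BEGIN SSH2"*)    echo "secsh" ;;
        ssh-dss*|ssh-rsa*) echo "openssh" ;;
        *)                 echo "unknown" ;;
    esac
}
```

Running key_format against a .pub file tells you whether a conversion (ssh-keygen -i or -e) is needed before installing it on the other implementation.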

Common Issues

As a first step, try using ssh with the "-v" flag to get more verbose details on what the SSH client is attempting to do. Pay attention to what key files it attempts to use, and the responses from the remote server.

User Accounts

Even though you may authenticate to a server correctly, SSH is still at the mercy of the user account on the remote system. If the account is locked, expired or otherwise inaccessible, it will appear as if the SSH connection is simply disconnecting.

If you can log in using a password, then the account is OK. Most likely you have an issue with your configuration or key files as described below.

Key location

As a first step, make sure you are not using the wrong configuration directory. OpenSSH will not look at a ~/.ssh2 directory, and commercial SSH won't look at a ~/.ssh directory.

Key Formats

Ensure that the key files have been converted as appropriate on the server system. See the above sections for details on conversion.

File Permissions

One of the more common gotchas with SSH is that it is militantly pedantic about file permissions. If the file permissions are not secure enough, SSH will ignore the key completely. This applies to the user's configuration directory (~/.ssh or ~/.ssh2) as well as the key files. If the user's home directory or the .ssh directory is writable by anyone other than the user, SSH will ignore it and all its contents completely. This applies to both the SSH client and the SSH server.

Here is what your permissions should be:

  • user home directory - 0755
  • ssh directory - 0755
  • private key files - 0400
  • public key files - 0644
  • other config files - 0644

As a first step, these permissions should be validated and set on both the client and the server to ensure that the SSH command is not ignoring your key files.
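Those permissions can be applied in one pass. A minimal sketch that takes the home directory as an argument and assumes the OpenSSH-style ~/.ssh layout (adjust the directory name for .ssh2):

```shell
#!/bin/sh
# Sketch: enforce the permissions listed above on a home directory.
fix_ssh_perms() {
    home=$1
    chmod 0755 "$home"
    [ -d "$home/.ssh" ] || return 0
    chmod 0755 "$home/.ssh"
    for f in "$home/.ssh"/*; do
        [ -f "$f" ] || continue
        case "$f" in
            */id_rsa|*/id_dsa) chmod 0400 "$f" ;;  # private keys
            *)                 chmod 0644 "$f" ;;  # public keys, other config
        esac
    done
    return 0
}
# usage: fix_ssh_perms "$HOME"
```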

Troubleshooting


  1. Ensure you can log in to the remote system interactively with a password - if you cannot, you have account issues and should raise a Clarify case with the administrators of the system you are connecting to.
  2. Verify you have created the correct private and public keys on the client system (i.e. the system you are initiating the connection from).
  3. Verify the permissions of those files are correct.
  4. Use ssh -v and verify that the ssh client is attempting to use the key files.
  5. On the remote system, verify the public keys are installed, converted to the correct format, in the correct locations, and with the correct permissions.
  6. If you still cannot connect, try from another system that you know works or has worked, to ensure that there is not some other change on the server system preventing your connection.
  7. If you get to here, raise a Clarify case with your systems administration group for investigation.

============================================================

SSH Public Key Authentication

The original article is at http://bumblebee.lcs.mit.edu/ssh2/

Basic Idea

No-password authentication works because of public key crypto. Let’s say you have a local machine Ooga and a remote machine Booga. You want to be able to ssh from Ooga to Booga without having to enter your password. First you generate a public/private RSA key pair on Ooga. Then you send your public key to Booga, so that Booga knows that Ooga’s key belongs to a list of authorized keys. Then when you try to ssh from Ooga to Booga, RSA authentication is performed automagically.

Here are detailed steps on how to do this.

NOTE: The following examples and scenarios assume you are creating only a single key, e.g. one RSA key or one DSA key. If it turns out that you’ve created both keys on your (client) system, then you will need to send both of them to the SSH/SSH2 server; otherwise, you may still be asked to enter a passphrase. Thanks to Steve McCarthy for pointing this out.

ssh1

If you’re using ssh1, then do this:

 ooga% ssh-keygen -f ~/.ssh/identity

This will generate a public/private rsa1 key pair. When it asks you to enter your passphrase, just hit return (i.e. leave it empty). Now you need to send your public key to the remote server.

 ooga% cd .ssh
ooga% scp identity.pub user@booga:~/.ssh

Now you need to log into Booga and add Ooga’s public key to Booga’s list of authorized keys.

 ooga% ssh user@booga

booga% cd .ssh
booga% cat identity.pub >> authorized_keys
booga% chmod 640 authorized_keys
booga% rm -f identity.pub

That’s it! You can now ssh from Ooga to Booga without entering your password.

ssh2

It’s harder for ssh2. There are two common implementations of ssh2: OpenSSH and SSH2. Let’s say we want to ssh from Ooga to Booga. If Ooga and Booga both run the same implementation then it’s easy. Otherwise, we need to do some extra work to make them talk to each other properly.

My particular situation is that my local machine is running Windows 2000 with the Cygwin tools and OpenSSH 3.2.x. The remote machines may either have OpenSSH or SSH2. I’ll cover these two cases below.

ssh2: Ooga = OpenSSH, Booga = OpenSSH

First, generate a public/private DSA key pair on Ooga.

 ooga% ssh-keygen -t dsa -f ~/.ssh/id_dsa

When you are asked for a passphrase, leave it empty. Now send the public key to Booga.

 ooga% cd .ssh
ooga% scp id_dsa.pub user@booga:~/.ssh

Next, log in to Booga and add the public key to the list of authorized keys.

 ooga% ssh user@booga

booga% cd .ssh
booga% cat id_dsa.pub >> authorized_keys2
booga% chmod 640 authorized_keys2
booga% rm -f id_dsa.pub

Note that the filename is authorized_keys2, not authorized_keys. That’s it; you’re ready to ssh from Ooga to Booga without having to enter a password.
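The scp-then-append dance can be collapsed into a single step. A sketch, assuming both ends run OpenSSH and reusing the hypothetical user@booga host from above:

```shell
#!/bin/sh
# Sketch: append a local public key to a remote authorized_keys2 file
# in one step, instead of scp followed by a manual cat on the server.
push_pubkey() {
    keyfile=$1   # e.g. ~/.ssh/id_dsa.pub
    remote=$2    # e.g. user@booga
    ssh "$remote" \
        'cat >> ~/.ssh/authorized_keys2 && chmod 640 ~/.ssh/authorized_keys2' \
        < "$keyfile"
}
# usage: push_pubkey ~/.ssh/id_dsa.pub user@booga
```

Newer OpenSSH installations also ship an ssh-copy-id script that does much the same thing.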

ssh2: Ooga = OpenSSH, Booga = SSH2

First, generate a public/private DSA key pair on Ooga.

 ooga% ssh-keygen -t dsa -f ~/.ssh/id_dsa

When you are asked for a passphrase, leave it empty. This key is stored in a format that OpenSSH can use, but SSH2 cannot. You need to export the key to a format that SSH2 understands.

 ooga% ssh-keygen -e -f .ssh/id_dsa.pub > id_dsa_ssh2_ooga.pub

Note: the exact flags you need to specify may differ in your case. Check the man pages if the line above doesn’t work. Now send the exported public key to Booga.

 ooga% scp id_dsa_ssh2_ooga.pub user@booga:~/.ssh2/

Note: the target directory is .ssh2, not .ssh. Next, log in to Booga and add the public key to the list of authorized keys.

 ooga% ssh user@booga

booga% cd .ssh2
booga% cat >> authorization
key id_dsa_ssh2_ooga.pub

booga% chmod 640 authorization

For SSH2, there is an authorization file in which you list the file names of the authorized public keys, rather than concatenating the keys themselves as OpenSSH does. (The cat command above reads from the terminal: type the "key" line, then press Ctrl-D to end the input.) Note that this step is different from the case in which Booga is running OpenSSH. Now you are ready to ssh from Ooga to Booga without having to enter a password.
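The append-and-chmod steps used in the variants above can be wrapped in a small helper that skips keys already present, so repeated runs don't pile up duplicate lines. A sketch: install_key is my own name, not a standard tool, and the file names follow the OpenSSH examples.

```shell
# install_key PUBKEYFILE AUTHFILE
# Append the public key in PUBKEYFILE to AUTHFILE unless an identical
# line is already there, then set the same permissions used above.
install_key() {
    pubkey_file=$1
    auth_file=$2
    touch "$auth_file"                 # create the file on first use
    if grep -qxF "$(cat "$pubkey_file")" "$auth_file"; then
        echo "key already present"
    else
        cat "$pubkey_file" >> "$auth_file"
        echo "key added"
    fi
    chmod 640 "$auth_file"
}
```

On Booga you would run, for example, install_key ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys2 in place of the manual cat/chmod pair.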

Monday, March 3, 2008

Curl|cURL - Manual

cURL - Manual

Manual -- curl usage explained

Related:
Man Page
FAQ
LATEST VERSION

  You always find news about what's going on, as well as the latest versions,
  on the curl web pages, located at: http://curl.haxx.se

SIMPLE USAGE

  Get the main page from Netscape's web-server:

        curl http://www.netscape.com/

  Get the README file from the user's home directory at funet's ftp-server:

        curl ftp://ftp.funet.fi/README

  Get a web page from a server using port 8000:

        curl http://www.weirdserver.com:8000/

  Get a directory listing of an FTP site:

        curl ftp://cool.haxx.se/

  Get the definition of curl from a dictionary:

        curl dict://dict.org/m:curl

  Fetch two documents at once:

        curl ftp://cool.haxx.se/ http://www.weirdserver.com:8000/

  Get a file off an FTPS server:

        curl ftps://files.are.secure.com/secrets.txt

  or use the more appropriate FTPS way to get the same file:

        curl --ftp-ssl ftp://files.are.secure.com/secrets.txt

  Get a file from an SSH server using SFTP:

        curl -u username sftp://shell.example.com/etc/issue

  Get a file from an SSH server using SCP, using a private key to
  authenticate:

        curl -u username: --key ~/.ssh/id_dsa --pubkey ~/.ssh/id_dsa.pub scp://shell.example.com/~/personal.txt

DOWNLOAD TO A FILE

  Get a web page and store it in a local file:

        curl -o thatpage.html http://www.netscape.com/

  Get a web page and store it in a local file named after the remote document
  (if no file name part is specified in the URL, this will fail):

        curl -O http://www.netscape.com/index.html

  Fetch two files and store them with their remote names:

        curl -O www.haxx.se/index.html -O curl.haxx.se/download.html

USING PASSWORDS

 FTP

   To ftp files using name and password, include them in the URL like:

        curl ftp://name:passwd@machine.domain:port/full/path/to/file

   or specify them with the -u flag like:

        curl -u name:passwd ftp://machine.domain:port/full/path/to/file

 FTPS

   It is just like for FTP, but you may also want to specify and use
   SSL-specific options for certificates etc.

   Note that using FTPS:// as a prefix is the "implicit" way as described in
   the standards, while the recommended "explicit" way is done by using
   FTP:// and the --ftp-ssl option.

 HTTP

   Curl also supports user and password in HTTP URLs, thus you can pick a
   file like:

        curl http://name:passwd@machine.domain/full/path/to/file

   or specify user and password separately like in:

        curl -u name:passwd http://machine.domain/full/path/to/file

   HTTP offers many different methods of authentication and curl supports
   several: Basic, Digest, NTLM and Negotiate. Without being told which
   method to use, curl defaults to Basic. You can also ask curl to pick the
   most secure one out of the ones the server accepts for the given URL by
   using --anyauth.

   NOTE! Since HTTP URLs don't support user and password, you can't use that
   style when using curl via a proxy. You _must_ use the -u style in such
   circumstances.

 HTTPS

   Probably most commonly used with private certificates, as explained below.

PROXY

  Get an ftp file using a proxy named my-proxy that uses port 888:

        curl -x my-proxy:888 ftp://ftp.leachsite.com/README

  Get a file from an HTTP server that requires user and password, using the
  same proxy as above:

        curl -u user:passwd -x my-proxy:888 http://www.get.this/

  Some proxies require special authentication. Specify it by using -U as
  above:

        curl -U user:passwd -x my-proxy:888 http://www.get.this/

  curl also supports SOCKS4 and SOCKS5 proxies with --socks4 and --socks5.

  See also the environment variables curl supports, which offer further
  proxy control.

RANGES

  HTTP 1.1 introduced byte-ranges. Using these, a client can request to get
  only one or more subparts of a specified document.
  Curl supports this with the -r flag.

  Get the first 100 bytes of a document:

        curl -r 0-99 http://www.get.this/

  Get the last 500 bytes of a document:

        curl -r -500 http://www.get.this/

  Curl also supports simple ranges for FTP files. There you can only specify
  a start and stop position.

  Get the first 100 bytes of a document using FTP:

        curl -r 0-99 ftp://www.get.this/README

UPLOADING

 FTP

  Upload all data on stdin to a specified ftp site:

        curl -T - ftp://ftp.upload.com/myfile

  Upload data from a specified file, logging in with user and password:

        curl -T uploadfile -u user:passwd ftp://ftp.upload.com/myfile

  Upload a local file to the remote site, and use the local file name on the
  remote side too:

        curl -T uploadfile -u user:passwd ftp://ftp.upload.com/

  Upload a local file to get appended to the remote file using ftp:

        curl -T localfile -a ftp://ftp.upload.com/remotefile

  Curl also supports ftp upload through a proxy, but only if the proxy is
  configured to allow that kind of tunneling. If it does, you can run curl in
  a fashion similar to:

        curl --proxytunnel -x proxy:port -T localfile ftp.upload.com

 HTTP

  Upload all data on stdin to a specified http site:

        curl -T - http://www.upload.com/myfile

  Note that the http server must have been configured to accept PUT before
  this can be done successfully.

  For other ways to do http data upload, see the POST section below.

VERBOSE / DEBUG

  If curl fails where it isn't supposed to, if the servers don't let you in,
  or if you can't understand the responses: use the -v flag to get verbose
  fetching. Curl will output lots of info about what it sends and receives
  in order to let the user see all client-server interaction (but it won't
  show you the actual data):

        curl -v ftp://ftp.upload.com/

  To get even more details and information on what curl does, try using the
  --trace or --trace-ascii options with a given file name to log to, like
  this:

        curl --trace trace.txt www.haxx.se

DETAILED INFORMATION

  Different protocols provide different ways of getting detailed information
  about specific files/documents. To get curl to show detailed information
  about a single file, use the -I/--head option. It displays all available
  info on a single file for HTTP and FTP. The HTTP information is a lot more
  extensive.

  For HTTP, you can get the header information (the same as -I would show)
  shown before the data by using -i/--include. Curl understands the
  -D/--dump-header option when getting files from both FTP and HTTP, and it
  will then store the headers in the specified file.

  Store the HTTP headers in a separate file (headers.txt in the example):

        curl --dump-header headers.txt curl.haxx.se

  Note that headers stored in a separate file can be very useful at a later
  time if you want curl to use cookies sent by the server. More about that in
  the cookies section.

POST (HTTP)

  It's easy to post data using curl. This is done using the -d <data>
  option. The post data must be urlencoded.

  Post a simple "name" and "phone" guestbook:

        curl -d "name=Rafael%20Sagula&phone=3320780" http://www.where.com/guest.cgi

  How to post a form with curl, lesson #1:

  Dig out all the <input> tags in the form that you want to fill in. (There's
  a perl program called formfind.pl on the curl site that helps with this.)

  If there's a "normal" post, you use -d to post. -d takes a full "post
  string", which is in the format

        <variable1>=<data1>&<variable2>=<data2>&...

  The 'variable' names are the names set with "name=" in the <input> tags,
  and the data is the contents you want to fill in for the inputs. The data
  *must* be properly URL encoded.
  That means you replace space with + and write unusual characters as %XX,
  where XX is the hexadecimal representation of the character's ASCII code.

  Example (page located at http://www.formpost.com/getthis/):

        <form action="post.cgi" method="post">
        <input name=user size=10>
        <input name=pass type=password size=10>
        <input name=id type=hidden value="blablabla">
        <input name=ding value="submit">
        </form>

  We want to enter user 'foobar' with password '12345'.

  To post to this, you enter a curl command line like:

        curl -d "user=foobar&pass=12345&id=blablabla&ding=submit"  (continues)
              http://www.formpost.com/getthis/post.cgi

  While -d uses the application/x-www-form-urlencoded mime-type, generally
  understood by CGIs and similar, curl also supports the more capable
  multipart/form-data type. This latter type supports things like file
  upload.

  -F accepts parameters like -F "name=contents". If you want the contents to
  be read from a file, use <@filename> as contents. When specifying a file,
  you can also specify the file content type by appending ';type=<mime type>'
  to the file name. You can also post the contents of several files in one
  field. For example, the field name 'coolfiles' is used to send three files
  with different content types using the following syntax:

        curl -F "coolfiles=@fil1.gif;type=image/gif,fil2.txt,fil3.html"
              http://www.post.com/postit.cgi

  If the content-type is not specified, curl will try to guess from the file
  extension (it only knows a few), or use the previously specified type (from
  an earlier file if several files are specified in a list), or else it will
  use the default type 'text/plain'.

  Emulate a fill-in form with -F. Let's say you fill in three fields in a
  form. One field is a file name to post, one field is your name and one
  field is a file description.
  We want to post the file we have written named "cooltext.txt". To let curl
  do the posting of this data instead of your favourite browser, you have to
  read the HTML source of the form page and find the names of the input
  fields. In our example, the input field names are 'file', 'yourname' and
  'filedescription'.

        curl -F "file=@cooltext.txt" -F "yourname=Daniel"
             -F "filedescription=Cool text file with cool text inside"
             http://www.post.com/postit.cgi

  To send two files in one post you can do it in two ways:

  1. Send multiple files in a single "field" with a single field name:

        curl -F "pictures=@dog.gif,cat.gif"

  2. Send two fields with two field names:

        curl -F "docpicture=@dog.gif" -F "catpicture=@cat.gif"

  To send a field value literally without interpreting a leading '@' or '<',
  or an embedded ';type=', use --form-string instead of -F. This is
  recommended when the value is obtained from a user or some other
  unpredictable source. Under these circumstances, using -F instead of
  --form-string would allow a user to trick curl into uploading a file.

REFERRER

  An HTTP request has the option to include information about which address
  referred it to the actual page. Curl allows you to specify the referrer to
  be used on the command line. It is especially useful to fool or trick
  stupid servers or CGI scripts that rely on that information being available
  or containing certain data:

        curl -e www.coolsite.com http://www.showme.com/

  NOTE: The referer field is defined in the HTTP spec to be a full URL.

USER AGENT

  An HTTP request has the option to include information about the browser
  that generated the request. Curl allows it to be specified on the command
  line. It is especially useful to fool or trick stupid servers or CGI
  scripts that only accept certain browsers.
  Example:

        curl -A 'Mozilla/3.0 (Win95; I)' http://www.nationsbank.com/

  Other common strings:

    'Mozilla/3.0 (Win95; I)'     Netscape Version 3 for Windows 95
    'Mozilla/3.04 (Win95; U)'    Netscape Version 3 for Windows 95
    'Mozilla/2.02 (OS/2; U)'     Netscape Version 2 for OS/2
    'Mozilla/4.04 [en] (X11; U; AIX 4.2; Nav)'       NS for AIX
    'Mozilla/4.05 [en] (X11; U; Linux 2.0.32 i586)'  NS for Linux

  Note that Internet Explorer tries hard to be compatible in every way:

    'Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)'  MSIE for W95

  Mozilla is not the only possible User-Agent name:

    'Konqueror/1.0'              KDE File Manager desktop client
    'Lynx/2.7.1 libwww-FM/2.14'  Lynx command line browser

COOKIES

  Cookies are generally used by web servers to keep state information at the
  client's side. The server sets cookies by sending a response line in the
  headers that looks like 'Set-Cookie: <data>' where the data part then
  typically contains a set of NAME=VALUE pairs (separated by semicolons ';'
  like "NAME1=VALUE1; NAME2=VALUE2;"). The server can also specify for what
  path the "cookie" should be used (by specifying "path=value"), when the
  cookie should expire ("expire=DATE"), for what domain to use it
  ("domain=NAME") and if it should be used on secure connections only
  ("secure").

  If you've received a page from a server that contains a header like:

        Set-Cookie: sessionid=boo123; path="/foo";

  it means the server wants that first pair passed on when we get anything in
  a path beginning with "/foo".

  Example, get a page that wants my name passed in a cookie:

        curl -b "name=Daniel" www.sillypage.com

  Curl also has the ability to use previously received cookies in following
  sessions. If you get cookies from a server and store them in a file in a
  manner similar to:

        curl --dump-header headers www.example.com

  ... you can then in a second connection to that (or another) site use the
  cookies from the 'headers' file like:

        curl -b headers www.example.com

  While saving headers to a file is a working way to store cookies, it is
  however error-prone and not the preferred way to do this. Instead, make
  curl save the incoming cookies using the well-known netscape cookie format
  like this:

        curl -c cookies.txt www.example.com

  Note that by specifying -b you enable the "cookie awareness" and with -L
  you can make curl follow a location: (which often is used in combination
  with cookies). So if a site sends cookies and a location, you can use a
  non-existent file to trigger the cookie awareness like:

        curl -L -b empty.txt www.example.com

  The file to read cookies from must be formatted using plain HTTP headers
  OR as netscape's cookie file. Curl will determine what kind it is based on
  the file contents. In the above command, curl will parse the header and
  store the cookies received from www.example.com. curl will send to the
  server the stored cookies which match the request as it follows the
  location. The file "empty.txt" may be a nonexistent file.

  To both read and write cookies from a netscape cookie file, you can set
  both -b and -c to use the same file:

        curl -b cookies.txt -c cookies.txt www.example.com

PROGRESS METER

  The progress meter exists to show a user that something actually is
  happening. The different fields in the output have the following meaning:

    % Total    % Received % Xferd  Average Speed          Time             Curr.
                                   Dload  Upload Total    Current  Left    Speed
    0  151M    0 38608    0     0   9406      0  4:41:43  0:00:04  4:41:39  9287

  From left to right:

    %             - percentage completed of the whole transfer
    Total         - total size of the whole expected transfer
    %             - percentage completed of the download
    Received      - currently downloaded amount of bytes
    %             - percentage completed of the upload
    Xferd         - currently uploaded amount of bytes
    Average Speed
    Dload         - the average transfer speed of the download
    Average Speed
    Upload        - the average transfer speed of the upload
    Time Total    - expected time to complete the operation
    Time Current  - time passed since the invocation
    Time Left     - expected time left to completion
    Curr.Speed    - the average transfer speed over the last 5 seconds (the
                    first 5 seconds of a transfer are based on less time, of
                    course)

  The -# option will display a totally different progress bar that doesn't
  need much explanation!

SPEED LIMIT

  Curl allows the user to set the transfer speed conditions that must be met
  to let the transfer keep going. By using the -y and -Y switches you can
  make curl abort transfers if the transfer speed is below the specified
  lowest limit for a specified time.

  To have curl abort the download if the speed is slower than 3000 bytes per
  second for 1 minute, run:

        curl -Y 3000 -y 60 www.far-away-site.com

  This can very well be used in combination with the overall time limit, so
  that the above operation must be completed in whole within 30 minutes:

        curl -m 1800 -Y 3000 -y 60 www.far-away-site.com

  Forcing curl not to transfer data faster than a given rate is also
  possible, which might be useful if you're using a limited bandwidth
  connection and you don't want your transfer to use all of it (sometimes
  referred to as "bandwidth throttle").
  Make curl transfer data no faster than 10 kilobytes per second:

        curl --limit-rate 10K www.far-away-site.com

  or:

        curl --limit-rate 10240 www.far-away-site.com

  Or prevent curl from uploading data faster than 1 megabyte per second:

        curl -T upload --limit-rate 1M ftp://uploadshereplease.com

  When using the --limit-rate option, the transfer rate is regulated on a
  per-second basis, which will cause the total transfer speed to become lower
  than the given number. Sometimes it is substantially lower, if your
  transfer stalls during periods.

CONFIG FILE

  Curl automatically tries to read the .curlrc file (or _curlrc file on
  win32 systems) from the user's home dir on startup.

  The config file can be made up of normal command line switches, but you
  can also specify the long options without the dashes to make it more
  readable. You can separate the options and the parameter with spaces, or
  with = or :. Comments can be used within the file: if the first letter on
  a line is a '#', the rest of the line is treated as a comment.

  If you want the parameter to contain spaces, you must enclose the entire
  parameter within double quotes ("). Within those quotes, you specify a
  quote as \".

  NOTE: You must specify options and their arguments on the same line.

  Example, set a default timeout and proxy in a config file:

        # We want a 30 minute timeout:
        -m 1800
        # ... and we use a proxy for all accesses:
        proxy = proxy.our.domain.com:8080

  White spaces ARE significant at the end of lines, but all white spaces
  leading up to the first characters of each line are ignored.
  Prevent curl from reading the default file by using -q as the first
  command line parameter, like:

        curl -q www.thatsite.com

  Force curl to get and display a local help page in case it is invoked
  without a URL by making a config file similar to:

        # default url to get
        url = "http://help.with.curl.com/curlhelp.html"

  You can specify another config file to be read by using the -K/--config
  flag. If you set the config file name to "-" it'll read the config from
  stdin, which can be handy if you want to hide options from being visible
  in process tables etc:

        echo "user = user:passwd" | curl -K - http://that.secret.site.com

EXTRA HEADERS

  When using curl in your own very special programs, you may end up needing
  to pass on your own custom headers when getting a web page. You can do
  this by using the -H flag.

  Example, send the header "X-you-and-me: yes" to the server when getting a
  page:

        curl -H "X-you-and-me: yes" www.love.com

  This can also be useful in case you want curl to send a different text in
  a header than it normally does. The -H header you specify then replaces
  the header curl would normally send. If you replace an internal header
  with an empty one, you prevent that header from being sent. To prevent the
  Host: header from being used:

        curl -H "Host:" www.server.com

FTP and PATH NAMES

  Do note that when getting files with an ftp:// URL, the given path is
  relative to the directory you enter. To get the file 'README' from your
  home directory at your ftp site, do:

        curl ftp://user:passwd@my.site.com/README

  But if you want the README file from the root directory of that very same
  site, you need to specify the absolute file name:

        curl ftp://user:passwd@my.site.com//README

  (i.e. with an extra slash in front of the file name.)
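Putting the CONFIG FILE rules above together, a commented .curlrc might look like the following (the particular option values are just examples of mine):

```
# Sample .curlrc -- long option names without the leading dashes.
# Separators may be spaces, '=' or ':'.

# Give up after 30 minutes (the long form of -m):
max-time = 1800

# Quote parameters that contain spaces; write inner quotes as \".
user-agent = "Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)"

# Use a proxy for all accesses:
proxy = proxy.our.domain.com:8080
```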
FTP and firewalls

  The FTP protocol requires one of the involved parties to open a second
  connection as soon as data is about to be transferred. There are two ways
  to do this.

  The default way for curl is to issue the PASV command, which causes the
  server to open another port and await another connection performed by the
  client. This is good if the client is behind a firewall that doesn't allow
  incoming connections:

        curl ftp.download.com

  If the server, for example, is behind a firewall that doesn't allow
  connections on ports other than 21 (or if it just doesn't support the PASV
  command), the other way to do it is to use the PORT command and instruct
  the server to connect to the client on the given IP number and port (given
  as parameters to the PORT command).

  The -P flag to curl supports a few different options. Your machine may
  have several IP addresses and/or network interfaces and curl allows you to
  select which of them to use. The default address can also be used:

        curl -P - ftp.download.com

  Download with PORT but use the IP address of our 'le0' interface (this
  does not work on windows):

        curl -P le0 ftp.download.com

  Download with PORT but use 192.168.0.10 as our IP address:

        curl -P 192.168.0.10 ftp.download.com

NETWORK INTERFACE

  Get a web page from a server using a specified interface:

        curl --interface eth0:1 http://www.netscape.com/

  or:

        curl --interface 192.168.1.10 http://www.netscape.com/

HTTPS

  Secure HTTP requires SSL libraries to be installed and used when curl is
  built. If that is done, curl is capable of retrieving and posting
  documents using the HTTPS protocol.

  Example:

        curl https://www.secure-site.com

  Curl is also capable of using your personal certificates to get/post files
  from sites that require valid certificates. The only drawback is that the
  certificate needs to be in PEM format.
  PEM is a standard and open format to store certificates with, but it is
  not used by the most commonly used browsers (Netscape and MSIE both use
  the so-called PKCS#12 format). If you want curl to use the certificates
  you use with your (favourite) browser, you may need to download/compile a
  converter that can convert your browser's formatted certificates to PEM
  formatted ones. This kind of converter is included in recent versions of
  OpenSSL, and for older versions Dr Stephen N. Henson has written a patch
  for SSLeay that adds this functionality. You can get his patch (that
  requires an SSLeay installation) from his site at:
  http://www.drh-consultancy.demon.co.uk/

  Example on how to automatically retrieve a document using a certificate
  with a personal password:

        curl -E /path/to/cert.pem:password https://secure.site.com/

  If you neglect to specify the password on the command line, you will be
  prompted for the correct password before any data can be received.

  Many older SSL servers have problems with SSLv3 or TLS, which newer
  versions of OpenSSL etc are using, so it is sometimes useful to specify
  what SSL version curl should use. Use -3, -2 or -1 to specify the exact
  SSL version to use (for SSLv3, SSLv2 or TLSv1 respectively):

        curl -2 https://secure.site.com/

  Otherwise, curl will first attempt to use v3 and then v2.

  To use OpenSSL to convert your favourite browser's certificate into a PEM
  formatted one that curl can use, do something like this (assuming
  netscape, but IE is likely to work similarly):

    - Hit the 'security' menu button in netscape.
    - Select 'certificates->yours' and then pick a certificate in the list.
    - Press the 'export' button.
    - Enter your PIN code for the certs.
    - Select a proper place to save it.
    - Run the 'openssl' application to convert the certificate. If you cd
      to the openssl installation, you can do it like:

        # ./apps/openssl pkcs12 -in [file you saved] -clcerts -out [PEMfile]

RESUMING FILE TRANSFERS

  To continue a file transfer where it was previously aborted, curl supports
  resume on http(s) downloads as well as ftp uploads and downloads.

  Continue downloading a document:

        curl -C - -o file ftp://ftp.server.com/path/file

  Continue uploading a document (*1):

        curl -C - -T file ftp://ftp.server.com/path/file

  Continue downloading a document from a web server (*2):

        curl -C - -o file http://www.server.com/

  (*1) = This requires that the ftp server supports the non-standard command
         SIZE. If it doesn't, curl will say so.

  (*2) = This requires that the web server supports at least HTTP/1.1. If it
         doesn't, curl will say so.

TIME CONDITIONS

  HTTP allows a client to specify a time condition for the document it
  requests, either If-Modified-Since or If-Unmodified-Since. Curl allows you
  to specify them with the -z/--time-cond flag.

  For example, you can easily make a download that only gets performed if
  the remote file is newer than a local copy. It would be made like:

        curl -z local.html http://remote.server.com/remote.html

  Or you can download a file only if the local file is newer than the remote
  one. Do this by prepending the date string with a '-', as in:

        curl -z -local.html http://remote.server.com/remote.html

  You can specify a "free text" date as the condition. Tell curl to only
  download the file if it was updated since January 12, 2012:

        curl -z "Jan 12 2012" http://remote.server.com/remote.html

  Curl will then accept a wide range of date formats. You always make the
  date check the other way around by prepending it with a dash '-'.

DICT

  For fun try:

        curl dict://dict.org/m:curl
        curl dict://dict.org/d:heisenbug:jargon
        curl dict://dict.org/d:daniel:web1913

  Aliases for 'm' are 'match' and 'find', and aliases for 'd' are 'define'
  and 'lookup'.
  For example:

        curl dict://dict.org/find:curl

  Commands that break the URL description of the RFC (but not the DICT
  protocol) are:

        curl dict://dict.org/show:db
        curl dict://dict.org/show:strat

  Authentication is still missing (but this is not required by the RFC).

LDAP

  If you have installed the OpenLDAP library, curl can take advantage of it
  and offer ldap:// support.

  LDAP is a complex thing and writing an LDAP query is not an easy task. I
  do advise you to dig up the syntax description for that elsewhere. Two
  places that might suit you are:

  Netscape's "Netscape Directory SDK 3.0 for C Programmer's Guide Chapter
  10: Working with LDAP URLs":
  http://developer.netscape.com/docs/manuals/dirsdk/csdk30/url.htm

  RFC 2255, "The LDAP URL Format": http://curl.haxx.se/rfc/rfc2255.txt

  To show you an example, this is how I can get all people from my local
  LDAP server that have a certain sub-domain in their email address:

        curl -B "ldap://ldap.frontec.se/o=frontec??sub?mail=*sth.frontec.se"

  If I want the same info in HTML format, I can get it by not using the -B
  (enforce ASCII) flag.

ENVIRONMENT VARIABLES

  Curl reads and understands the following environment variables:

        http_proxy, HTTPS_PROXY, FTP_PROXY

  They should be set for protocol-specific proxies. A general proxy should
  be set with:

        ALL_PROXY

  A comma-separated list of host names that shouldn't go through any proxy
  is set in (only an asterisk, '*', matches all hosts):

        NO_PROXY

  If a tail substring of the domain-path for a host matches one of these
  strings, transactions with that node will not be proxied.

  The usage of the -x/--proxy flag overrides the environment variables.

NETRC

  Unix introduced the .netrc concept a long time ago.
  It is a way for a user to specify name and password for commonly visited
  ftp sites in a file so that you don't have to type them in each time you
  visit those sites. You realize this is a big security risk if someone else
  gets hold of your passwords, so therefore most unix programs won't read
  this file unless it is only readable by yourself (curl doesn't care
  though).

  Curl supports .netrc files if told to (using the -n/--netrc and
  --netrc-optional options). This is not restricted to only ftp; curl can
  use it for all protocols where authentication is used.

  A very simple .netrc file could look something like:

        machine curl.haxx.se login iamdaniel password mysecret

CUSTOM OUTPUT

  To better allow script programmers to get to know about the progress of
  curl, the -w/--write-out option was introduced. Using this, you can
  specify what information from the previous transfer you want to extract.

  To display the amount of bytes downloaded together with some text and an
  ending newline:

        curl -w 'We downloaded %{size_download} bytes\n' www.download.com

KERBEROS FTP TRANSFER

  Curl supports kerberos4 and kerberos5/GSSAPI for FTP transfers. You need
  the kerberos package installed and used at curl build time for it to be
  available.

  First, get the krb-ticket the normal way, like with the kinit/kauth tool.
  Then use curl in a way similar to:

        curl --krb private ftp://krb4site.com -u username:fakepwd

  There's no use for a password on the -u switch, but a blank one will make
  curl ask for one, and you already entered the real password to
  kinit/kauth.

TELNET

  The curl telnet support is basic and very easy to use. Curl passes all
  data passed to it on stdin to the remote server. Connect to a remote
  telnet server using a command line similar to:

        curl telnet://remote.server.com

  And enter the data to pass to the server on stdin.
  The result will be sent to stdout or to the file you specify with -o.

  You might want the -N/--no-buffer option to switch off the buffered output
  for slow connections or similar.

  Pass options to the telnet protocol negotiation by using the -t option. To
  tell the server we use a vt100 terminal, try something like:

        curl -tTTYPE=vt100 telnet://remote.server.com

  Other interesting options for -t include:

    - XDISPLOC=<X display>  Sets the X display location.
    - NEW_ENV=<var,val>     Sets an environment variable.

  NOTE: the telnet protocol does not specify any way to log in with a
  specified user and password, so curl can't do that automatically. To do
  that, you need to track when the login prompt is received and send the
  username and password accordingly.

PERSISTENT CONNECTIONS

  Specifying multiple files on a single command line will make curl transfer
  all of them, one after the other in the specified order.

  libcurl will attempt to use persistent connections for the transfers so
  that the second transfer to the same host can use the same connection that
  was already initiated and left open by the previous transfer. This greatly
  decreases connection time for all but the first transfer and it makes far
  better use of the network.

  Note that curl cannot use persistent connections for transfers that span
  separate curl invocations. Try to stuff as many URLs as possible on the
  same command line if they are using the same host, as that'll make the
  transfers faster. If you use an http proxy for file transfers, practically
  all transfers will be persistent.

MULTIPLE TRANSFERS WITH A SINGLE COMMAND LINE

  As mentioned above, you can download multiple files with one command line
  by simply adding more URLs. If you want those to get saved to a local file
  instead of just printed to stdout, you need to add one save option for
  each URL you specify. Note that this also goes for the -O option.
  For example: get two files and use -O for the first and a custom file name
  for the second:

        curl -O http://url.com/file.txt ftp://ftp.com/moo.exe -o moo.jpg

  You can also upload multiple files in a similar fashion:

        curl -T local1 ftp://ftp.com/moo.exe -T local2 ftp://ftp.com/moo2.txt

MAILING LISTS

  For your convenience, we have several open mailing lists to discuss curl,
  its development and things relevant to this. Get all the info at
  http://curl.haxx.se/mail/. Some of the lists available are:

  curl-users

    Users of the command line tool. How to use it, what doesn't work, new
    features, related tools, questions, news, installations, compilations,
    running, porting etc.

  curl-library

    Developers using or developing libcurl. Bugs, extensions, improvements.

  curl-announce

    Low-traffic. Only receives announcements of new public versions. At
    worst, that makes something like one or two mails per month, but usually
    only one mail every second month.

  curl-and-php

    Using the curl functions in PHP. Everything curl with a PHP angle. Or
    PHP with a curl angle.

  curl-and-python

    Python hackers using curl with or without the python binding pycurl.

  Please direct curl questions, feature requests and trouble reports to one
  of these mailing lists instead of mailing any individual.
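A closing footnote to the POST section earlier: the URL-encoding rule given there (spaces become '+', other unsafe characters become %XX hex escapes) can be sketched as a shell function, so post strings can be built from arbitrary text. urlencode is my own name for it, not a curl feature:

```shell
# urlencode STRING -- print STRING with spaces turned into '+' and all
# other characters outside the unreserved set escaped as %XX, per the
# encoding rule described in the POST section.
urlencode() {
    s=$1
    out=
    while [ -n "$s" ]; do
        c=${s%"${s#?}"}        # first character of the string
        s=${s#?}               # the rest
        case $c in
            [A-Za-z0-9._~-]) out=$out$c ;;          # safe, pass through
            ' ') out=$out+ ;;                        # space -> +
            *) out=$out$(printf '%%%02X' "'$c") ;;   # anything else -> %XX
        esac
    done
    printf '%s\n' "$out"
}
```

With it, the guestbook example could be written as: curl -d "name=$(urlencode 'Rafael Sagula')&phone=3320780" http://www.where.com/guest.cgi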