Saturday, October 31, 2009

Software|6 Free Icon Editors

6 Free Icon Editors

Author: its | Published: 2009-10-30 (9:38)

Icons are used all the time in Web design (the Favicon, for example), and Windows and Mac applications need them too. You can of course design icons in a full-blown editor such as Photoshop or Paint.NET, but there are also free, simple tools you can pick up on the spot. This article collects 6 free icon editing tools; some of them can even extract icons from binary files.

Greenfish Icon Editor Pro

 

Greenfish Icon Editor Pro is a powerful yet very small icon editor: it is under 1.8 MB unpacked. It supports layers, can create animated icons, and includes high-quality filters such as bevel, drop shadow, and glow.


Greenfish Icon Editor Pro

IcoFX

 

IcoFX is a well-known icon editor with a long feature list: it supports Vista icons with PNG compression, offers batch processing, handles high-quality icons up to 256x256 pixels, and can import and export images in formats such as bmp, jpg, gif, and png.


IcoFX

Sib Icon Editor

 

Sib Icon Editor is a free icon tool for Windows. It can edit icons and manage icon libraries, edit PNG icons, paste material directly from images in other formats, extract icons from Windows executables, and even convert Mac icons to Windows icons.

Sib Icon Editor

Stardock IconDeveloper

 

IconDeveloper makes it easy to create Windows icons. It comes in two editions: a free one and a paid enhanced edition ($19.95). For ordinary users the free edition is more than enough; the enhanced edition can convert images in other formats into icon files, adjust color, hue, and gamma curves, and batch-process entire folders.

Stardock IconDeveloper

aaICO Icon Editor

 

aaICO is a simple, easy-to-use yet capable icon editor. It does not have many features and its interface is plain, which suits designers who just want to draw icons.

aaICO Icon Editor

LiquidIcon XP

 

LiquidIcon XP includes a standard set of image editing tools plus an icon extractor that can pull icons out of EXE or DLL files, as well as tools for inverting, rotating, and mirroring images.

LiquidIcon XP

Wednesday, October 28, 2009

Python|How to Use UTF-8 with Python


How to Use UTF-8 with Python

Tim Bray describes why Unicode and UTF-8 are wonderful much better than I could, so go read that for an overview of what Unicode is, and why all your programs should support it. What I'm going to tell you is how to use Unicode, and specifically UTF-8, with one of the coolest programming languages, Python, but I have also written an introduction to Using Unicode in C/C++. Python has good support for Unicode, but there are a few tricks that you need to be aware of. I spent more than a few hours learning these tricks, and I'm hoping that by reading this you won't have to. This is a very quick and dirty introduction. If you need in depth knowledge, or need to learn about Unicode in Java or Windows, see Unicode for Programmers. [Updated 2005-09-01: Updated information about XML encoding declarations.]

The Basics

There are two types of strings in Python: byte strings and Unicode strings. As you may have guessed, a byte string is a sequence of bytes. When needed, Python uses your computer's default locale to convert the bytes into characters. On Mac OS X, the default locale is actually UTF-8, but everywhere else, the default is probably ASCII. This creates a byte string:

byteString = "hello world! (in my default locale)"

And this creates a Unicode string:

unicodeString = u"hello Unicode world!"

Convert a byte string into a Unicode string and back again:

s = "hello byte string" u = unicode( s ) backToBytes = u.encode() 

The previous code uses your default character set to perform the conversions. However, relying on the locale's character set is a bad idea, since your application is likely to break as soon as someone from Thailand tries to run it on their computer. In most cases it is probably better to explicitly specify the encoding of the string:

s = "hello normal string" u = unicode( s, "utf-8" ) backToBytes = u.encode( "utf-8" ) 

Now, the byte string s will be treated as a sequence of UTF-8 bytes to create the Unicode string u. The next line stores the UTF-8 representation of u in the byte string backToBytes.
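To see why this matters, here is a small illustration (assuming your default codec is ASCII, as it is on most systems other than Mac OS X): decoding non-ASCII bytes without naming an encoding fails.

utf8Bytes = u"très bien".encode( "utf-8" )  # A byte string containing non-ASCII bytes
u = unicode( utf8Bytes, "utf-8" )           # Works: the encoding is given explicitly
u = unicode( utf8Bytes )                    # Raises UnicodeDecodeError under an ASCII default codec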

Working With Unicode Strings

Thankfully, everything in Python is supposed to treat Unicode strings identically to byte strings. However, you need to be careful in your own code when testing to see if an object is a string. Do not do this:

if isinstance( s, str ): # BAD: Not true for Unicode strings!

Instead, use the generic string base class, basestring:

if isinstance( s, basestring ): # True for both Unicode and byte strings

Reading UTF-8 Files

You can manually convert strings that you read from files; however, there is an easier way:

import codecs
fileObj = codecs.open( "someFile", "r", "utf-8" )
u = fileObj.read() # Returns a Unicode string from the UTF-8 bytes in the file

The codecs module will take care of all the conversions for you. You can also open a file for writing and it will convert the Unicode strings you pass in to write into whatever encoding you have chosen. However, take a look at the note below about the byte-order marker (BOM).
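Writing works the same way; here is a minimal sketch (the output file name is just a placeholder):

import codecs
outFile = codecs.open( "someOutputFile", "w", "utf-8" )
outFile.write( u"hello Unicode world!" )  # Encoded to UTF-8 bytes on the way out
outFile.close()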

Working with XML and minidom

I use the minidom module for my XML needs mostly because I am familiar with it. Unfortunately, it only handles byte strings so you need to encode your Unicode strings before passing them to minidom functions. For example:

import xml.dom.minidom
xmlData = u"<français>Comment ça va ? Très bien ?</français>"
dom = xml.dom.minidom.parseString( xmlData )

The last line raises an exception: UnicodeEncodeError: 'ascii' codec can't encode character u'\xe7' in position 5: ordinal not in range(128). To work around this error, encode the Unicode string into the appropriate format before passing it to minidom, like this:

import xml.dom.minidom
xmlData = u"<français>Comment ça va ? Très bien ?</français>"
dom = xml.dom.minidom.parseString( xmlData.encode( "utf-8" ) )

Minidom can handle any format of byte string, such as Latin-1 or UTF-16. However, it will only work reliably if the XML document has an encoding declaration (eg. <?xml version="1.0" encoding="Latin-1"?>). If the encoding declaration is missing, minidom assumes that it is UTF-8. It is a good habit to include an encoding declaration on all your XML documents, in order to guarantee compatibility on all systems.

When you get XML out of minidom by calling dom.toxml() or dom.toprettyxml(), minidom returns a Unicode string. You can also pass in an additional encoding="utf-8" parameter to get an encoded byte string, perfect for writing out to a file.
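As a quick sketch building on the example above (the output file name is just a placeholder):

unicodeXml = dom.toxml()                 # Returns a Unicode string
utf8Xml = dom.toxml( encoding="utf-8" )  # Returns a UTF-8 byte string with an encoding declaration
open( "someFile.xml", "wb" ).write( utf8Xml )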

The Byte-Order Marker (BOM)

UTF-8 files sometimes start with a byte-order marker (BOM) to indicate that they are encoded in UTF-8. This is commonly used on Windows. On Mac OS X, applications (eg. TextEdit) ignore the BOM and remove it if the file is saved again. The W3C HTML Validator warns that older applications may not be able to handle the BOM. Unicode-aware software effectively ignores the marker, so it should not matter when reading the file. You may wish to add a BOM to the beginning of your files so that readers can tell whether they are encoded in ASCII or UTF-8. The codecs module provides the constant you need to do this:

out = file( "someFile", "w" )
out.write( codecs.BOM_UTF8 )
out.write( unicodeString.encode( "utf-8" ) )
out.close()

You need to be careful when using the BOM and UTF-8. Frankly, I think this is a bug in Python, but what do I know. Python will decode the value of the BOM into a Unicode character, instead of ignoring it. For example (tested with Python 2.3):

>>> codecs.BOM_UTF16.decode( "utf16" )
u''
>>> codecs.BOM_UTF8.decode( "utf8" )
u'\ufeff'

For UTF-16, Python decoded the BOM into an empty string, but for UTF-8, it decoded it into a character. Why is there a difference? I think the UTF-8 decoder should do the same thing as the UTF-16 decoder and strip out the BOM. However, it doesn't, so you will probably need to detect it and remove it yourself, like this:

import codecs

if s.startswith( codecs.BOM_UTF8 ):
    # The byte string s begins with the BOM: Do something.
    # For example, decode the string as UTF-8
    u = s.decode( "utf-8" )

if u[0] == unicode( codecs.BOM_UTF8, "utf8" ):
    # The Unicode string begins with the BOM: Do something.
    # For example, remove the character
    u = u[1:]

# Strip the BOM from the beginning of the Unicode string, if it exists
u = u.lstrip( unicode( codecs.BOM_UTF8, "utf8" ) )

Writing Python Scripts in Unicode

As you may have noticed from the examples on this page, you can actually write Python scripts in UTF-8. Variable names must be ASCII, but you can include Chinese comments or Korean strings in your source files. In order for this to work correctly, Python needs to know that your script file is not ASCII. You can do this in one of two ways. First, you can place a UTF-8 byte-order marker at the beginning of your file, if your editor supports it. Second, you can place the following special comment in the first or second line of your script:

# -*- coding: utf-8 -*- 

Any ASCII-compatible encoding is permitted. For details, see the Defining Python Source Code Encodings specification.
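For example, a short script saved as UTF-8 might look like this (the strings are just placeholders):

# -*- coding: utf-8 -*-
# The coding declaration above lets this file contain non-ASCII text,
# such as this Korean string and a Chinese comment.
greeting = u"안녕하세요"  # 问候语 (a greeting)
print greeting.encode( "utf-8" )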

Thursday, October 8, 2009

Shell|sed/awk Equivalents of Unix Commands

sed/awk Equivalents of Unix Commands - LinuxSir.Org

Title: sed/awk Equivalents of Unix Commands


Reposted from: www.chinaunix.net, with thanks
sed equivalents of Unix commands
Thanks to the reposter: admirer
Code:
-------------------------------------------------------------------------------
cat          | sed ':'
cat -s       | sed '/./,/^$/!d'
tac          | sed '1!G;h;$!d'
grep         | sed '/patt/!d'
grep -v      | sed '/patt/d'
head         | sed '10q'
head -1      | sed 'q'
tail         | sed -e ':a' -e '$q;N;11,$D;ba'
tail -1      | sed '$!d'
tail -f      | sed -u '/./!d'
cut -c 10    | sed 's/\(.\)\{10\}.*/\1/'
cut -d: -f4  | sed 's/\(\([^:]*\):\)\{4\}.*/\2/'
tr A-Z a-z   | sed 'y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/'
tr a-z A-Z   | sed 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'
tr -s ' '    | sed 's/ \+/ /g'
tr -d '\012' | sed 'H;$!d;g;s/\n//g'
wc -l        | sed -n '$='
uniq         | sed 'N;/^\(.*\)\n\1$/!P;D'
rev          | sed '/\n/!G;s/\(.\)\(.*\n\)/&\2\1/;//D;s/.//'
basename     | sed 's,.*/,,'
dirname      | sed 's,[^/]*$,,'
xargs        | sed -e ':a' -e '$!N;s/\n/ /;ta'
paste -sd:   | sed -e ':a' -e '$!N;s/\n/:/;ta'
cat -n       | sed '=' | sed '$!N;s/\n/ /'
grep -n      | sed -n '/patt/{=;p;}' | sed '$!N;s/\n/:/'
cp orig new  | sed 'w new' orig
-------------------------------------------------------------------------------
awk equivalents of Unix commands
Thanks to the author: 飞灰橙
Code:
-------------------------------------------------------------------------------
cat          |awk '{print}'
cat -s       |awk '{blank = NF == 0 ? ++blank : 0; if (blank <= 1) print;}'
tac          |awk '{t[NR] = $0;}END{for (i = NR; i >= 1; i--) print t[i];}'
grep patten  |awk '/patten/{print}'
grep -v patten |awk '! /patten/{print}'
head         |awk 'NR <= 10 {print}' 24.sh
head -1      |awk '{print; exit; }' 24.sh
tail         |awk '{t[n++ % 10] = $0}END{for (i = 0; i < 10; i++) print t[n++ % 10];}'
tail -1      |awk '{t = $0}END{print t}'
cut -c 10    |awk '{print substr($0, 10, 1)}'
cut -d: -f4  |awk -F: '{if (NF > 1) print $4; else print;}'
tr A-Z a-z   |awk '{print tolower($0);}' se.sh
tr a-z A-Z   |awk '{print toupper($0);}' se.sh
tr -s ' '    |awk '{print gensub(" +", " ", "g")}'
tr -d '\012' |awk '{printf "%s", $0}'
wc -l        |awk 'END{printf "% 7d\n", NR-1}'
uniq         |awk '{if (NR == 1 || ln != $0) print; ln = $0;}'
rev          |awk '{l = ""; for (i = length($0); i > 0; i--) printf "%c", substr($0, i, 1); print "";}'
basename     |awk -F'/' '{print $NF}'
dirname      |awk -F'/' '{if (NF <= 1) printf "."; else {OFS="/"; $NF=""; printf "%s", substr($0, 1, length($0) - 1);}}'
xargs        |awk '{printf "%s ", $0}END{print}'
paste -sd:   |awk 'NR > 1{printf ":%s", $0}'
cat -n       |awk '{printf "% 6d %s\n", NR, $0}'
grep -n      |awk '/ss/{print NR":"$0}'
cp orig new  |awk '{print > "new"}' orig
-------------------------------------------------------------------------------

Shell|Text Conversion/Filter Tools

GNU/Linux Command-Line Tools Summary - Text Conversion/Filter Tools

11.5. Text Conversion/Filter Tools

  • Filters (UNIX system/DOS formats)

    The following filters allow you to change text from DOS style to UNIX system style and vice versa, or convert a file to other formats. Also note that many modern text editors can do this for you...

    • Why use filters?

      Because UNIX systems and Microsoft use two different standards to represent the end of a line in an ASCII text file.

      This can sometimes cause problems in editors or viewers which aren't familiar with the other operating system's end-of-line style. The following tools allow you to get around this difference.

    • What's the difference?

      The difference is very simple: in a Windows text file, a newline is signalled by a carriage return followed by a newline, '\r\n' in ASCII.

      On a UNIX system a newline is simply a newline, '\n' in ASCII.



  • dos2unix

    This converts Microsoft-style end-of-line characters to UNIX system style end-of-line characters.

    Simply type:

    dos2unix file.txt

  • fromdos

    This does the same as dos2unix (above).

    Simply type:

    fromdos file.txt

    fromdos can be obtained from the from/to dos website.


  • unix2dos

    This converts UNIX system style end-of-line characters to Microsoft-style end-of-line characters.

    Simply type:

    unix2dos file.txt

  • todos

    This does the same as unix2dos (above).

    Simply type:

    todos file.txt

    todos can be obtained from the from/to dos website.


  • antiword

    This filter converts Microsoft Word documents into plain ASCII text documents.

    Simply type:

    antiword file.doc

    You can get antiword from the antiword homepage.


  • recode

    Converts text files between various formats, including HTML and dozens of different text encodings.

    Use recode -l for a full listing. It can also be used to convert text to and from Windows and UNIX system formats (so you don't get the weird symbols).

    Warning: By default recode overwrites the input file; use '<' to use recode as a filter only (and to not overwrite the file).

    • Examples:

    UNIX system text to Windows text:

    recode ..pc file_name

    Windows text to UNIX system text:

    recode ..pc/ file_name

    UNIX system text to Windows text without overwriting the original file (creating a new output file instead):

    recode ..pc < file_name > recoded_file


  • tr

    (Windows to UNIX system style conversion only.) tr is not specifically designed to convert files from Windows format to UNIX system format, but it can do it like this:

    tr -d '\r' < inputFile.txt > outputFile.txt

    The -d switch means to simply delete every occurrence of the carriage return character, '\r', from the input.
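    For comparison, a minimal Python sketch of the same Windows-to-UNIX conversion (the file names are only placeholders for this example):

    # Read the whole file as bytes and delete every '\r', like tr -d '\r'
    data = open( "inputFile.txt", "rb" ).read()
    open( "outputFile.txt", "wb" ).write( data.replace( "\r", "" ) )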

Wednesday, October 7, 2009

Sales|Email Spam Act

Linx Software and the Spam Act

Linx Software & the Spam Act

We believe that electronic communication, such as by email, is the most effective and efficient means of communication available to businesses today. Not only is it instant but it helps save the environment. As an IT company we not only use it ourselves but encourage our clients, such as finance brokers and aggregators, to use it too.

However, in so doing, we all need to be aware of the importance of complying with the Spam Act 2003 (the relevant law applicable in Australia).

The following describes our policy and how we comply with the Act.

What is Spam?

It may be worth explaining, first, what is meant by "spam".

"Spam" is not, in fact, defined in the Spam Act. However, the Act refers to "unsolicited commercial electronic messages" and so, in the discussion below, "spam" is taken to have that meaning and is specifically applied to email.

There is much talk about all advertising email being spam - with the implication that all spam is illegal. It is not. The Act permits the use of email for marketing purposes providing certain conditions are closely adhered to.

Clearly there is a considerable difference between receiving dubious emails from unknown entities and receiving one from, say, Officeworks offering you stationery supplies for your business, or from a lender, professional association or lending-industry supplier offering their services. In fact, you may well welcome the latter kind of email as a good source of information.


Our Compliance

Here are the key requirements of the Spam Act and how we comply:

Subject: Consent
Requirement: The Act permits the sending of email where there is explicit or inferred consent. Inferred consent is where an email address is conspicuously published (ie. made available for public viewing) where it would be reasonable to expect to receive emails relating to the employment function or role of that person.
Compliance: Linx Software has a mailing list made up of both explicit and inferred consent contacts. While the exact source of each email address is not recorded, they are collectively compiled from sources such as enquiries from those in the lending industry (emails, trade shows, functions, etc); business cards displaying an email address, including from competitions such as Lucky Door Prizes; publicly-available Web sites; and past clients or other contacts in our email In Box. The exception to the above is where any of these sources displays a "no spam" or similar message – these are obviously excluded.

Subject: Relevant to Job Role
Requirement: Communications must be relevant to the job role of the recipient.
Compliance: All emails are related to the work function of the recipient, such as mortgage or finance broking, as known at the time of entry.

Subject: No Harvesting
Requirement: Electronic harvesting of email addresses from the Web is not permitted.
Compliance: No harvesting software is used – all addresses are manually collated and managed.

Subject: Identification
Requirement: The Act requires clear identification of the sender.
Compliance: All emails from Linx Software fully and accurately identify the sender and generally include address, phone number, Web site and ACN or ABN.

Subject: Unsubscribe Function
Requirement: The Act requires senders to have a functional unsubscribe facility and to act on it within five days.
Compliance: All emails from Linx Software have a convenient unsubscribe link, and requests are processed within a few hours of receipt. Tip: Any unsubscribe request should be sent from the same address as it was addressed to. (Sometimes mail is forwarded to a different address and the original address cannot be determined.)

Notes

This is not intended as legal comment or advice. You can find out more by reading "Spam Act 2003: A Practical Guide for Business" (PDF 239Kb). (If you need Acrobat PDF Reader you can download it free from Adobe).

If the link to the above guide has changed, try the Australian Government Web site here: www.dbcde.gov.au

If you are outside Australia and feel that our communications do not meet your local regulations please let us know of your concerns. We will make every effort to comply.

Thank you. We hope this has answered any questions you might have about our Spam policy.