Programming, Math, Design
Author: Vamei. Source: http://www.cnblogs.com/vamei. You are welcome to repost, but please keep this notice. Thanks!
We have already seen the subprocess package, but it has two major limitations: 1) subprocess always runs an external program rather than a function written inside the Python script; 2) the processes can only exchange text through pipes. These limits keep subprocess from serving broader multi-process tasks. (The comparison is not entirely fair, since subprocess is designed to act as a shell rather than as a multi-process management package.)
threading and multiprocessing
(Please read the earlier related posts first if you can.)
multiprocessing is Python's multi-process management package. Much like threading.Thread, it lets you create a process with a multiprocessing.Process object, and that process can run a function defined inside the Python program. A Process object is used the same way as a Thread object: it also has start(), run(), and join() methods. The multiprocessing package likewise provides Lock/Event/Semaphore/Condition classes (which can be passed to each process as arguments, just as in multithreading) to synchronize processes, and they are used exactly like their namesakes in the threading package. In short, a large part of multiprocessing shares the same API as threading, only moved to a multi-process setting.
But when using these shared APIs, keep the following points in mind:
On UNIX platforms, when a process terminates, its parent must call wait on it, or the child becomes a zombie process. It is therefore necessary to call join() on every Process object (join() is effectively a wait). For multithreading this is unnecessary, because all the threads live in a single process.
multiprocessing provides IPC mechanisms that the threading package does not have (such as Pipe and Queue), and they are more efficient. Prefer Pipe and Queue, and avoid synchronization primitives such as Lock/Event/Semaphore/Condition where possible (because what they occupy is not a resource of the user process).
Multiple processes should avoid sharing resources. In a multi-threaded program resources can be shared fairly easily, for example through shared variables. With multiple processes this does not work, because each process has its own independent memory space. In that case resources can be shared through shared memory or a Manager object (see the sketch after these notes), but doing so increases the program's complexity and, because of the required synchronization, lowers its efficiency.
Process.pid holds the process's PID. Before start() is called, pid is None.
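The following sketch is not from the original post; it is a minimal, hedged example of the two sharing mechanisms just mentioned: a multiprocessing.Value living in shared memory and a dict proxy obtained from a Manager. The name add_one is made up for illustration.

import multiprocessing

def add_one(counter, shared_dict):
    # Value objects carry their own lock for safe in-place updates
    with counter.get_lock():
        counter.value += 1
    shared_dict[multiprocessing.current_process().name] = counter.value

if __name__ == '__main__':
    counter = multiprocessing.Value('i', 0)   # an int in shared memory
    manager = multiprocessing.Manager()
    shared_dict = manager.dict()              # proxy served by a manager process
    workers = [multiprocessing.Process(target=add_one, args=(counter, shared_dict))
               for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)                      # 4
    print(dict(shared_dict))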
The program below shows the similarity in usage, and the difference in results, between Thread objects and Process objects. Each thread and each process does one thing: print its PID. The catch is that all of them write to the same standard output (stdout), so the printed characters would get mixed together and become unreadable. Using a Lock to synchronize, so that one task finishes printing before another is allowed to print, avoids having several tasks write to the terminal at the same time.
# Similarity and difference of multi thread vs. multi process
# Written by Vamei
import os
import threading
import multiprocessing

# worker function
def worker(sign, lock):
    lock.acquire()
    print(sign, os.getpid())
    lock.release()

# Main
print('Main:', os.getpid())

# Multi-thread
record = []
lock = threading.Lock()
for i in range(5):
    thread = threading.Thread(target=worker, args=('thread', lock))
    thread.start()
    record.append(thread)

for thread in record:
    thread.join()

# Multi-process
record = []
lock = multiprocessing.Lock()
for i in range(5):
    process = multiprocessing.Process(target=worker, args=('process', lock))
    process.start()
    record.append(process)

for process in record:
    process.join()
All the Threads print the same PID as the main program, while each Process prints a different PID.
(Exercise: use the multiprocessing package to rewrite the multi-threaded program from the earlier post as a multi-process program.)
Pipe and Queue
As with the pipes (PIPE) and message queues introduced earlier, the multiprocessing package provides the Pipe class and the Queue class to support these two IPC mechanisms. Pipe and Queue can carry ordinary Python objects.
1) A Pipe can be one-way (half-duplex) or two-way (duplex). A one-way pipe is created with multiprocessing.Pipe(duplex=False) (the default is two-way). One process feeds objects into one end of the PIPE and a process at the other end receives them; a one-way pipe only allows the process at one end to send, while a two-way pipe allows sending from both ends.
The following program demonstrates Pipe:
# Multiprocessing with Pipe
# Written by Vamei
import multiprocessing as mul

def proc1(pipe):
    pipe.send('hello')
    print('proc1 rec:', pipe.recv())

def proc2(pipe):
    print('proc2 rec:', pipe.recv())
    pipe.send('hello, too')

# Build a pipe
pipe = mul.Pipe()

# Pass an end of the pipe to process 1
p1 = mul.Process(target=proc1, args=(pipe[0],))
# Pass the other end of the pipe to process 2
p2 = mul.Process(target=proc2, args=(pipe[1],))
p1.start()
p2.start()
p1.join()
p2.join()
The Pipe here is two-way. When a Pipe object is created, it returns a pair of two elements, each representing one end of the pipe (a Connection object). We call send() on one end of the Pipe to transmit an object, and recv() on the other end to receive it.
2) A Queue is similar to a Pipe: both are first-in, first-out structures. But a Queue allows multiple processes to put objects in and multiple processes to take objects out. A Queue is created with multiprocessing.Queue(maxsize), where maxsize is the maximum number of objects the queue can hold.
The following program demonstrates Queue:
# Written by Vamei
import os
import multiprocessing
import time
#==================
# input worker
def inputQ(queue):
    info = str(os.getpid()) + '(put):' + str(time.time())
    queue.put(info)

# output worker
def outputQ(queue, lock):
    info = queue.get()
    lock.acquire()
    print(str(os.getpid()) + '(get):' + info)
    lock.release()
#===================

record1 = []                      # store input processes
record2 = []                      # store output processes
lock = multiprocessing.Lock()     # to prevent messy print
queue = multiprocessing.Queue(3)

# input processes
for i in range(10):
    process = multiprocessing.Process(target=inputQ, args=(queue,))
    process.start()
    record1.append(process)

# output processes
for i in range(10):
    process = multiprocessing.Process(target=outputQ, args=(queue, lock))
    process.start()
    record2.append(process)

for p in record1:
    p.join()

queue.close()                     # no more objects will come; close the queue

for p in record2:
    p.join()
Some processes use put() to place strings containing their PID and the current time into the Queue; other processes take items from the Queue and print their own PID together with the string obtained from get().
Summary:
Process, Lock, Event, Semaphore, Condition
Pipe, Queue
The notes below are by Rollen Holt.
I try to learn a little every day and accumulate it bit by bit as a spare-time gain. I wrote this article during a meal, using scattered free time to learn how to work with MySQL from Python, so here is a quick write-up. I use MySQLdb to operate the MySQL database. Let's start with a simple example:
import MySQLdb
try:
    conn = MySQLdb.connect(host='localhost', user='root', passwd='root', db='test', port=3306)
    cur = conn.cursor()
    cur.execute('select * from user')
    cur.close()
    conn.close()
except MySQLdb.Error, e:
    print "Mysql Error %d: %s" % (e.args[0], e.args[1])
Remember to adjust the database, host name, user name, and password for your own setup.
Next is a rough demonstration of inserting a row, inserting rows in bulk, and updating data:
import MySQLdb
try:
    conn = MySQLdb.connect(host='localhost', user='root', passwd='root', port=3306)
    cur = conn.cursor()
    cur.execute('create database if not exists python')
    conn.select_db('python')
    cur.execute('create table test(id int, info varchar(20))')

    value = [1, 'hi rollen']
    cur.execute('insert into test values(%s,%s)', value)

    values = []
    for i in range(20):
        values.append((i, 'hi rollen' + str(i)))
    cur.executemany('insert into test values(%s,%s)', values)

    cur.execute('update test set info="I am rollen" where id=3')
    conn.commit()
    cur.close()
    conn.close()
except MySQLdb.Error, e:
    print "Mysql Error %d: %s" % (e.args[0], e.args[1])
Note that the conn.commit() call is required to commit the transaction; without it the data is never actually inserted.
I will not paste a screenshot of the resulting MySQL data here.
import MySQLdb
try:
    conn = MySQLdb.connect(host='localhost', user='root', passwd='root', port=3306)
    cur = conn.cursor()
    conn.select_db('python')

    count = cur.execute('select * from test')
    print 'there has %s rows record' % count

    result = cur.fetchone()
    print result
    print 'ID: %s info %s' % result

    results = cur.fetchmany(5)
    for r in results:
        print r
    print '==' * 10

    cur.scroll(0, mode='absolute')
    results = cur.fetchall()
    for r in results:
        print r[1]

    conn.commit()
    cur.close()
    conn.close()
except MySQLdb.Error, e:
    print "Mysql Error %d: %s" % (e.args[0], e.args[1])
The output is too long to paste here.
Chinese text displayed correctly after querying, but it was garbled inside the database. After some searching I found that one attribute fixes this: add a charset argument to the connection call, changing conn = MySQLdb.Connect(host='localhost', user='root', passwd='root', db='python') to conn = MySQLdb.Connect(host='localhost', user='root', passwd='root', db='python', charset='utf8'). The charset must match the database's encoding; if the database uses gb2312, write charset='gb2312'.
Here are the commonly used functions.
The connection object also supports transactions, with the standard methods commit() to commit and rollback() to roll back (a small sketch follows this list).
Cursor methods for executing commands:
callproc(self, procname, args): execute a stored procedure; takes the procedure name and a parameter list, and returns the number of affected rows.
execute(self, query, args): execute a single SQL statement; takes the statement and a parameter list, and returns the number of affected rows.
executemany(self, query, args): execute a single SQL statement repeatedly, once for each parameter tuple in the list; returns the number of affected rows.
nextset(self): move to the next result set.
Cursor methods for fetching results:
fetchall(self): fetch all remaining result rows.
fetchmany(self, size=None): fetch size result rows; if size is larger than the number of rows available, cursor.arraysize rows are returned.
fetchone(self): fetch a single result row.
scroll(self, value, mode='relative'): move the cursor to a row; with mode='relative' it moves value rows from the current row, with mode='absolute' it moves value rows from the first row of the result set.
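As a small illustration of commit() and rollback() (not from the original article; it reuses the test table and connection settings from the examples above, which may need adjusting for your setup):

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='root', passwd='root', db='python', charset='utf8')
cur = conn.cursor()
try:
    cur.execute('insert into test values(%s,%s)', (100, 'transactional insert'))
    conn.commit()              # make the insert permanent
except MySQLdb.Error, e:
    conn.rollback()            # undo the partial work on error
    print "Mysql Error %d: %s" % (e.args[0], e.args[1])
cur.close()
conn.close()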
Reference: 15.2. io — Core tools for working with streams (Python 2.7.14 documentation)
New in version 2.6.
The io module provides the Python interfaces to stream handling. Under Python 2.x, this is proposed as an alternative to the built-in file object, but in Python 3.x it is the default interface to access files and streams.
Note: Since this module has been designed primarily for Python 3.x, you have to be aware that all uses of "bytes" in this document refer to the str type (of which bytes is an alias), and all uses of "text" refer to the unicode type. Furthermore, those two types are not interchangeable in the io APIs.
At the top of the I/O hierarchy is the abstract base class IOBase. It defines the basic interface to a stream. Note, however, that there is no separation between reading and writing to streams; implementations are allowed to raise an IOError if they do not support a given operation.
Extending IOBase is RawIOBase, which deals simply with the reading and writing of raw bytes to a stream. FileIO subclasses RawIOBase to provide an interface to files in the machine's file system.
BufferedIOBase deals with buffering on a raw byte stream (RawIOBase). Its subclasses, BufferedWriter, BufferedReader, and BufferedRWPair, buffer streams that are readable, writable, and both readable and writable. BufferedRandom provides a buffered interface to random access streams. BytesIO is a simple stream of in-memory bytes.
Another IOBase subclass, TextIOBase, deals with streams whose bytes represent text, and handles encoding and decoding from and to unicode strings. TextIOWrapper, which extends it, is a buffered text interface to a buffered raw stream (BufferedIOBase). Finally, StringIO is an in-memory stream for unicode text.
Argument names are not part of the specification, and only the arguments of open() are intended to be used as keyword arguments.
15.2.1. Module Interface
io.DEFAULT_BUFFER_SIZE
An int containing the default buffer size used by the module's buffered I/O classes. open() uses the file's blksize (as obtained by os.stat()) if possible.
io.open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True)
Open file and return a corresponding stream.
If the file cannot be opened, an IOError is raised.
file is either a string giving the pathname (absolute or
relative to the current working directory) of the file to be opened or
an integer file descriptor of the file to be wrapped.
(If a file descriptor
is given, it is closed when the returned I/O object is closed, unless
closefd is set to False.)
mode is an optional string that specifies the mode in which the file is opened. It defaults to 'r' which means open for reading in text mode.
Other common values are 'w' for writing (truncating the file if it
already exists), and 'a' for appending (which on some Unix systems,
means that all writes append to the end of the file regardless of the
current seek position).
In text mode, if encoding is not specified the
encoding used is platform dependent. (For reading and writing raw bytes use
binary mode and leave encoding unspecified.)
The available modes are:
'r'   open for reading (default)
'w'   open for writing, truncating the file first
'a'   open for writing, appending to the end of the file if it exists
'b'   binary mode
't'   text mode (default)
'+'   open a disk file for updating (reading and writing)
'U'   universal newlines mode (for backwards compatibility; should not be used in new code)
The default mode is 'rt' (open for reading text).
For binary random
access, the mode 'w+b' opens and truncates the file to 0 bytes, while
'r+b' opens the file without truncation.
Python distinguishes between files opened in binary and text modes, even when
the underlying operating system doesn’t.
Files opened in binary mode
(including 'b' in the mode argument) return contents as bytes
objects without any decoding.
In text mode (the default, or when 't' is
included in the mode argument), the contents of the file are returned as
strings, the bytes having been first decoded using a
platform-dependent encoding or using the specified encoding if given.
buffering is an optional integer used to set the buffering policy.
Pass 0 to switch buffering off (only allowed in binary mode), 1 to select
line buffering (only usable in text mode), and an integer > 1 to indicate
the size of a fixed-size chunk buffer.
When no buffering argument is
given, the default buffering policy works as follows:
Binary files are buffered in fixed-size chunks; the size of the buffer is chosen using a heuristic trying to determine the underlying device's "block size" and falling back on io.DEFAULT_BUFFER_SIZE.
On many systems, the buffer will typically be 4096 or 8192 bytes long.
“Interactive” text files (files for which isatty() returns True)
use line buffering.
Other text files use the policy described above
for binary files.
encoding is the name of the encoding used to decode or encode the file.
This should only be used in text mode.
The default encoding is platform dependent (whatever locale.getpreferredencoding() returns), but any encoding supported by Python can be used. See the codecs module for the list of supported encodings.
errors is an optional string that specifies how encoding and decoding errors are to be handled; this cannot be used in binary mode. Pass 'strict' to raise a ValueError exception if there is an encoding error (the default of None has the same effect), or pass 'ignore' to ignore errors. (Note that ignoring encoding errors can lead to data loss.) 'replace' causes a replacement marker (such as '?') to be inserted where there is malformed data. When writing, 'xmlcharrefreplace' (replace with the appropriate XML character reference) or 'backslashreplace' (replace with backslashed escape sequences) can be used. Any other error handling name that has been registered with codecs.register_error() is also valid.
newline controls how universal newlines works (it only applies to text mode).
It can be None, '', '\n', '\r', and '\r\n'.
It works as follows:
On input, if newline is None, universal newlines mode is enabled. Lines in the input can end in '\n', '\r', or '\r\n', and these are translated into '\n' before being returned to the caller. If it is '', universal newlines mode is enabled, but line endings are returned to the caller untranslated.
If it has any of the other legal values, input
lines are only terminated by the given string, and the line ending is
returned to the caller untranslated.
On output, if newline is None, any '\n' characters written are translated to the system default line separator, os.linesep. If newline is '', no translation takes place.
If newline is any of
the other legal values, any '\n' characters written are translated to
the given string.
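As a small sketch of these newline rules (not part of the documentation; the file name newline_demo.txt is just a placeholder): writing with newline='' leaves '\n' untranslated, while reading with the default newline=None enables universal newlines.

import io

with io.open('newline_demo.txt', 'w', newline='') as f:
    f.write(u'one\ntwo\n')           # written verbatim, no translation to os.linesep

with io.open('newline_demo.txt', 'r', newline=None) as f:
    print(repr(f.read()))            # universal newlines: u'one\ntwo\n'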
If closefd is False and a file descriptor rather than a filename was given, the underlying file descriptor will be kept open when the file is closed. If a filename is given, closefd has no effect and must be True (the default).
The type of file object returned by the open() function depends on the mode. When open() is used to open a file in a text mode ('w', 'r', 'wt', 'rt', etc.), it returns a subclass of TextIOBase (specifically TextIOWrapper). When used to open a file in a binary mode with buffering, the returned class is a subclass of BufferedIOBase. The exact class varies: in read binary mode, it returns a BufferedReader; in write binary and append binary modes, it returns a BufferedWriter, and in read/write mode, it returns a BufferedRandom. When buffering is disabled, the raw stream, a subclass of RawIOBase, FileIO, is returned.
It is also possible to use a unicode or bytes string as a file for both reading and writing. For unicode strings, StringIO can be used like a file opened in text mode, and for bytes a BytesIO can be used like a file opened in a binary mode.
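A short sketch of the behaviour described above (not part of the documentation; demo.txt is a placeholder file name): the class of the stream returned by io.open() depends on the mode and buffering arguments.

import io

with io.open('demo.txt', 'w', encoding='utf-8') as f:
    f.write(u'hello\n')

f = io.open('demo.txt', 'r', encoding='utf-8')
print(type(f))                       # text mode: TextIOWrapper
f.close()

f = io.open('demo.txt', 'rb')
print(type(f))                       # buffered binary read mode: BufferedReader
f.close()

f = io.open('demo.txt', 'rb', buffering=0)
print(type(f))                       # unbuffered binary: the raw FileIO
f.close()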
exception io.BlockingIOError
Error raised when blocking would occur on a non-blocking stream. It inherits IOError.
In addition to those of IOError, BlockingIOError has one attribute:
characters_written
An integer containing the number of characters written to the stream before it blocked.
exception io.UnsupportedOperation
An exception inheriting IOError and ValueError that is raised when an unsupported operation is called on a stream.
15.2.2. I/O Base Classes
class io.IOBase
The abstract base class for all I/O classes, acting on streams of bytes.
There is no public constructor.
This class provides empty abstract implementations for many methods that derived classes can override selectively; the default implementations represent a file that cannot be read, written or seeked.
Even though IOBase does not declare read(), readinto(), or write() because their signatures will vary, implementations and clients should consider those methods part of the interface. Also, implementations may raise an IOError when operations they do not support are called.
The basic type used for binary data read from or written to a file is bytes (also known as str). Method arguments may also be bytearray or memoryview of arrays of bytes. In some cases, such as readinto(), a writable object such as bytearray is required. Text I/O classes work with unicode data.
Changed in version 2.7: Implementations should support memoryview arguments.
Note that calling any method (even inquiries) on a closed stream is undefined. Implementations may raise IOError in this case.
IOBase (and its subclasses) support the iterator protocol, meaning that an IOBase object can be iterated over yielding the lines in a stream. Lines are defined slightly differently depending on whether the stream is a binary stream (yielding bytes), or a text stream (yielding unicode strings).
IOBase is also a context manager and therefore supports the with statement.
In this example, file is closed after the with statement's suite is finished, even if an exception occurs:
with io.open('spam.txt', 'w') as file:
    file.write(u'Spam and eggs!')
IOBase provides these data attributes and methods:
close()
Flush and close this stream. This method has no effect if the file is already closed. Once the file is closed, any operation on the file (e.g. reading or writing) will raise a ValueError.
As a convenience, it is allowed to call this method more than once; only the first call, however, will have an effect.
closed
True if the stream is closed.
fileno()
Return the underlying file descriptor (an integer) of the stream if it exists. An IOError is raised if the IO object does not use a file descriptor.
flush()
Flush the write buffers of the stream if applicable. This does nothing for read-only and non-blocking streams.
isatty()
Return True if the stream is interactive (i.e., connected to a terminal/tty device).
readable()
Return True if the stream can be read from. If False, read() will raise IOError.
readline(limit=-1)
Read and return one line from the stream. If limit is specified, at most limit bytes will be read.
The line terminator is always b'\n' for binary files; for text files, the newline argument to open() can be used to select the line terminator(s) recognized.
readlines(hint=-1)
Read and return a list of lines from the stream.
hint can be specified
to control the number of lines read: no more lines will be read if the
total size (in bytes/characters) of all lines so far exceeds hint.
Note that it’s already possible to iterate on file objects using for
line in file: ... without calling file.readlines().
seek(offset[, whence])
Change the stream position to the given byte offset. offset is interpreted relative to the position indicated by whence. The default value for whence is SEEK_SET. Values for whence are:
SEEK_SET or 0 – start of the stream (the default); offset should be zero or positive
SEEK_CUR or 1 – current stream position; offset may be negative
SEEK_END or 2 – end of the stream; offset is usually negative
Return the new absolute position. (A short sketch of seek() and the iterator protocol follows this class description.)
New in version 2.7: The SEEK_* constants.
seekable()
Return True if the stream supports random access. If False, seek(), tell() and truncate() will raise IOError.
tell()
Return the current stream position.
truncate(size=None)
Resize the stream to the given size in bytes (or the current position
if size is not specified).
The current stream position isn’t changed.
This resizing can extend or reduce the current file size.
In case of
extension, the contents of the new file area depend on the platform
(on most systems, additional bytes are zero-filled, on Windows they’re
undetermined).
The new file size is returned.
writable()
Return True if the stream supports writing. If False, write() and truncate() will raise IOError.
writelines(lines)
Write a list of lines to the stream. Line separators are not added, so it is usual for each of the lines provided to have a line separator at the end.
__del__()
Prepare for object destruction. IOBase provides a default implementation of this method that calls the instance's close() method.
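The following is a small sketch (not part of the documentation) of the IOBase behaviour described above: iteration over lines, readlines(), and seek()/tell(), using an in-memory BytesIO stream.

import io

stream = io.BytesIO(b'first\nsecond\nthird\n')
for line in stream:                  # the iterator protocol yields lines
    print(repr(line))

stream.seek(0)                       # SEEK_SET: back to the start
print(stream.readlines())            # the three lines, each ending in '\n'
print(stream.tell())                 # current position: 19, the total size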
class io.RawIOBase
Base class for raw binary I/O. It inherits IOBase. There is no public constructor.
Raw binary I/O typically provides low-level access to an underlying OS
device or API, and does not try to encapsulate it in high-level primitives
(this is left to Buffered I/O and Text I/O, described later in this page).
In addition to the attributes and methods from ,
RawIOBase provides the following methods:
read(n=-1)
Read up to n bytes from the object and return them. As a convenience, if n is unspecified or -1, readall() is called. Otherwise, only one system call is ever made.
Fewer than n bytes may be
returned if the operating system call returns fewer than n bytes.
If 0 bytes are returned, and n was not 0, this indicates end of file.
If the object is in non-blocking mode and no bytes are available,
None is returned.
readall()
Read and return all the bytes from the stream until EOF, using multiple calls to the stream if necessary.
readinto(b)
Read up to len(b) bytes into b, and return the number of bytes read. The object b should be a pre-allocated, writable array of bytes, either bytearray or memoryview. If the object is in non-blocking mode and no bytes are available, None is returned.
write(b)
Write b to the underlying raw stream, and return the number of bytes written. The object b should be an array of bytes, either bytes, bytearray, or memoryview.
The return value can be less than
len(b), depending on specifics of the underlying raw stream, and
especially if it is in non-blocking mode.
None is returned if the
raw stream is set not to block and no single byte could be readily
written to it.
The caller may release or mutate b after
this method returns, so the implementation should only access b
during the method call.
class io.BufferedIOBase
Base class for binary streams that support some kind of buffering. It inherits IOBase. There is no public constructor.
The main difference with RawIOBase is that methods read(), readinto() and write() will try (respectively) to read as much input as requested or to consume all given output, at the expense of making perhaps more than one system call.
In addition, those methods can raise BlockingIOError if the underlying raw stream is in non-blocking mode and cannot take or give enough data; unlike their RawIOBase counterparts, they will never return None.
Besides, the read() method does not have a default implementation that defers to readinto().
A typical BufferedIOBase implementation should not inherit from a RawIOBase implementation, but wrap one, like BufferedWriter and BufferedReader do.
BufferedIOBase provides or overrides these methods and attribute in addition to those from IOBase:
raw
The underlying raw stream (a RawIOBase instance) that BufferedIOBase deals with. This is not part of the BufferedIOBase API and may not exist on some implementations.
detach()
Separate the underlying raw stream from the buffer and return it.
After the raw stream has been detached, the buffer is in an unusable state.
Some buffers, like BytesIO, do not have the concept of a single raw stream to return from this method. They raise UnsupportedOperation.
New in version 2.7.
read(n=-1)
Read and return up to n bytes.
If the argument is omitted, None, or
negative, data is read and returned until EOF is reached.
An empty bytes
object is returned if the stream is already at EOF.
If the argument is positive, and the underlying raw stream is not
interactive, multiple raw reads may be issued to satisfy the byte count
(unless EOF is reached first).
But for interactive raw streams, at most one raw read will be issued, and a short result does not imply that EOF is imminent.
A BlockingIOError is raised if the underlying raw stream is in non blocking-mode, and has no data available at the moment.
read1(n=-1)
Read and return up to n bytes, with at most one call to the underlying raw stream's read() method. This can be useful if you are implementing your own buffering on top of a BufferedIOBase object.
readinto(b)
Read up to len(b) bytes into b, and return the number of bytes read. The object b should be a pre-allocated, writable array of bytes, either bytearray or memoryview.
Like read(), multiple reads may be issued to the underlying raw stream, unless the latter is 'interactive'.
A BlockingIOError is raised if the underlying raw stream is in non blocking-mode, and has no data available at the moment.
write(b)
Write b, and return the number of bytes written (always equal to len(b), since if the write fails an IOError will be raised). The object b should be an array of bytes, either bytes, bytearray, or memoryview. Depending on the actual implementation, these bytes may be readily written to the underlying stream, or held in a buffer for performance and latency reasons.
When in non-blocking mode, a BlockingIOError is raised if the data needed to be written to the raw stream but it couldn't accept all the data without blocking.
The caller may release or mutate b after this method returns,
so the implementation should only access b during the method call.
15.2.3. Raw File I/O
class io.FileIO(name, mode='r', closefd=True)
FileIO represents an OS-level file containing bytes data. It implements the RawIOBase interface (and therefore the IOBase interface, too).
The name can be one of two things:
a string representing the path to the file
an integer representing the number of an existing OS-level file descriptor to which the resulting FileIO object will give access.
The mode can be 'r', 'w' or 'a' for reading (default), writing, or appending. The file will be created if it doesn't exist when opened for writing or appending; it will be truncated when opened for writing. Add a '+' to the mode to allow simultaneous reading and writing.
The read() (when called with a positive argument), readinto()
and write() methods on this class will only make one system call.
In addition to the attributes and methods from IOBase and RawIOBase, FileIO provides the following data attributes and methods:
mode
The mode as given in the constructor.
name
The file name. This is the file descriptor of the file when no name is given in the constructor.
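A small sketch of raw file I/O (not part of the documentation; demo.bin is a placeholder name): each read() maps to at most one system call, and readinto() fills a pre-allocated bytearray.

import io

raw = io.FileIO('demo.bin', 'w')
raw.write(b'abcdef')
raw.close()

raw = io.FileIO('demo.bin', 'r')
buf = bytearray(4)
n = raw.readinto(buf)                # reads up to len(buf) bytes
print(n)                             # 4
print(buf)                           # bytearray(b'abcd')
print(raw.read())                    # the remaining bytes: 'ef'
raw.close()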
15.2.4. Buffered Streams
Buffered I/O streams provide a higher-level interface to an I/O device
than raw I/O does.
class io.BytesIO([initial_bytes])
A stream implementation using an in-memory bytes buffer. It inherits BufferedIOBase.
The optional argument initial_bytes is a bytes object that contains initial data.
BytesIO provides or overrides these methods in addition to those from BufferedIOBase and IOBase:
getvalue()
Return bytes containing the entire contents of the buffer.
read1()
In BytesIO, this is the same as read().
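A minimal sketch (not part of the documentation) of BytesIO and getvalue():

import io

b = io.BytesIO(b'initial ')
b.seek(0, 2)                         # SEEK_END: append after the initial data
b.write(b'data')
print(b.getvalue())                  # the whole buffer: 'initial data'
b.close()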
class io.BufferedReader(raw, buffer_size=DEFAULT_BUFFER_SIZE)
A buffer providing higher-level access to a readable, sequential RawIOBase object. It inherits BufferedIOBase.
When reading data from this object, a larger amount of data may be requested from the underlying raw stream, and kept in an internal buffer. The buffered data can then be returned directly on subsequent reads.
The constructor creates a BufferedReader for the given readable raw stream and buffer_size. If buffer_size is omitted, DEFAULT_BUFFER_SIZE is used.
BufferedReader provides or overrides these methods in addition to those from BufferedIOBase and IOBase:
peek([n])
Return bytes from the stream without advancing the position. At most one single read on the raw stream is done to satisfy the call. The number of bytes returned may be less or more than requested.
read([n])
Read and return n bytes, or if n is not given or negative, until EOF or if the read call would block in non-blocking mode.
read1(n)
Read and return up to n bytes with only one call on the raw stream. If at least one byte is buffered, only buffered bytes are returned. Otherwise, one raw stream read call is made.
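A small sketch (not part of the documentation) of peek(), read() and read1() on a BufferedReader; here the "raw" stream is an in-memory BytesIO, which is enough for illustration.

import io

raw = io.BytesIO(b'0123456789')
reader = io.BufferedReader(raw, buffer_size=4)
print(reader.peek(2))                # some buffered bytes; the position does not move
print(reader.read(3))                # '012'
print(reader.read1(4))               # up to 4 more bytes, at most one raw read
reader.close()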
class io.BufferedWriter(raw, buffer_size=DEFAULT_BUFFER_SIZE)
A buffer providing higher-level access to a writeable, sequential RawIOBase object. It inherits BufferedIOBase.
When writing to this object, data is normally held into an internal buffer. The buffer will be written out to the underlying RawIOBase object under various conditions, including:
when the buffer gets too small for all pending data;
when flush() is called;
when a seek() is requested (for BufferedRandom objects);
when the BufferedWriter object is closed or destroyed.
The constructor creates a BufferedWriter for the given writeable raw stream. If the buffer_size is not given, it defaults to DEFAULT_BUFFER_SIZE.
A third argument, max_buffer_size, is supported, but unused and deprecated.
BufferedWriter provides or overrides these methods in addition to those from BufferedIOBase and IOBase:
flush()
Force bytes held in the buffer into the raw stream. A BlockingIOError should be raised if the raw stream blocks.
write(b)
Write b, and return the number of bytes written. The object b should be an array of bytes, either bytes, bytearray, or memoryview.
When in non-blocking mode, a BlockingIOError is raised if the buffer needs to be written out but the raw stream blocks.
class io.BufferedRandom(raw, buffer_size=DEFAULT_BUFFER_SIZE)
A buffered interface to random access streams. It inherits BufferedReader and BufferedWriter, and further supports seek() and tell() functionality.
The constructor creates a reader and writer for a seekable raw stream, given in the first argument. If the buffer_size is omitted it defaults to DEFAULT_BUFFER_SIZE.
A third argument, max_buffer_size, is supported, but unused and deprecated.
BufferedRandom is capable of anything BufferedReader or BufferedWriter can do.
class io.BufferedRWPair(reader, writer, buffer_size=DEFAULT_BUFFER_SIZE)
A buffered I/O object combining two unidirectional RawIOBase objects – one readable, the other writeable – into a single bidirectional endpoint. It inherits BufferedIOBase.
reader and writer are RawIOBase objects that are readable and writeable respectively. If the buffer_size is omitted it defaults to DEFAULT_BUFFER_SIZE.
A fourth argument, max_buffer_size, is supported, but unused and deprecated.
BufferedRWPair implements all of BufferedIOBase's methods except for detach(), which raises UnsupportedOperation.
BufferedRWPair does not attempt to synchronize accesses to its underlying raw streams. You should not pass it the same object as reader and writer; use BufferedRandom instead.
15.2.5. Text I/O
class io.TextIOBase
Base class for text streams. This class provides a unicode character and line based interface to stream I/O. There is no readinto() method because Python's character strings are immutable. It inherits IOBase. There is no public constructor.
TextIOBase provides or overrides these data attributes and methods in addition to those from IOBase:
encoding
The name of the encoding used to decode the stream's bytes into strings, and to encode strings into bytes.
errors
The error setting of the decoder or encoder.
newlines
A string, a tuple of strings, or None, indicating the newlines translated so far. Depending on the implementation and the initial constructor flags, this may not be available.
buffer
The underlying binary buffer (a BufferedIOBase instance) that TextIOBase deals with. This is not part of the TextIOBase API and may not exist on some implementations.
detach()
Separate the underlying binary buffer from the TextIOBase and return it.
After the underlying buffer has been detached, the TextIOBase is in an unusable state.
Some TextIOBase implementations, like StringIO, may not have the concept of an underlying buffer and calling this method will raise UnsupportedOperation.
New in version 2.7.
read(n)
Read and return at most n characters from the stream as a single unicode. If n is negative or None, reads until EOF.
readline(limit=-1)
Read until newline or EOF and return a single unicode. If the stream is already at EOF, an empty string is returned.
If limit is specified, at most limit characters will be read.
seek(offset[, whence])
Change the stream position to the given offset. Behaviour depends on the whence parameter. The default value for whence is SEEK_SET.
SEEK_SET or 0: seek from the start of the stream (the default); offset must either be a number returned by TextIOBase.tell(), or zero. Any other offset value produces undefined behaviour.
SEEK_CUR or 1: "seek" to the current position; offset must be zero, which is a no-operation (all other values are unsupported).
SEEK_END or 2: seek to the end of the stream; offset must be zero (all other values are unsupported).
Return the new absolute position as an opaque number.
New in version 2.7: The SEEK_* constants.
tell()
Return the current stream position as an opaque number. The number does not usually represent a number of bytes in the underlying binary storage.
write(s)
Write the string s to the stream and return the number of characters written.
class io.TextIOWrapper(buffer, encoding=None, errors=None, newline=None, line_buffering=False)
A buffered text stream over a BufferedIOBase binary stream. It inherits TextIOBase.
encoding gives the name of the encoding that the stream will be decoded or encoded with. It defaults to locale.getpreferredencoding().
errors is an optional string that specifies how encoding and decoding
errors are to be handled.
Pass 'strict' to raise a ValueError exception if there is an encoding error (the default of None has the same
effect), or pass 'ignore' to ignore errors.
(Note that ignoring encoding
errors can lead to data loss.)
'replace' causes a replacement marker
(such as '?') to be inserted where there is malformed data.
writing, 'xmlcharrefreplace' (replace with the appropriate XML character
reference) or 'backslashreplace' (replace with backslashed escape
sequences) can be used.
Any other error handling name that has been registered with codecs.register_error() is also valid.
newline controls how line endings are handled.
It can be None,
'', '\n', '\r', and '\r\n'.
It works as follows:
On input, if newline is None, universal newlines mode is enabled. Lines in the input can end in '\n', '\r', or '\r\n', and these are translated into '\n' before being returned to the caller. If it is '', universal newlines mode is enabled, but line
endings are returned to the caller untranslated.
If it has any of the
other legal values, input lines are only terminated by the given string,
and the line ending is returned to the caller untranslated.
On output, if newline is None, any '\n' characters written are translated to the system default line separator, os.linesep. If newline is '', no translation takes place.
If newline is any of
the other legal values, any '\n' characters written are translated to
the given string.
If line_buffering is True, flush() is implied when a call to
write contains a newline character.
TextIOWrapper provides one attribute in addition to those of TextIOBase and its parents:
line_buffering
Whether line buffering is enabled.
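A small sketch (not part of the documentation) of a TextIOWrapper layered over an in-memory binary buffer; it shows the unicode-in, encoded-bytes-out behaviour described above.

import io

binary = io.BytesIO()
text = io.TextIOWrapper(binary, encoding='utf-8', newline='\n')
text.write(u'caf\xe9\n')             # unicode in ...
text.flush()
print(repr(binary.getvalue()))       # ... UTF-8 bytes out: 'caf\xc3\xa9\n'

text.seek(0)
print(repr(text.read()))             # decoded back to unicode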
class io.StringIO(initial_value=u'', newline=u'\n')
An in-memory stream for unicode text. It inherits TextIOWrapper.
The initial value of the buffer can be set by providing initial_value. If newline translation is enabled, newlines will be encoded as if by write(). The stream is positioned at the start of the buffer.
The newline argument works like that of TextIOWrapper. The default is to consider only \n characters as ends of lines and to do no newline translation. If newline is set to None, newlines are written as \n on all platforms, but universal newline decoding is still performed when reading.
StringIO provides this method in addition to those from TextIOWrapper and its parents:
getvalue()
Return a unicode containing the entire contents of the buffer at any time before the StringIO object's close() method is called. Newlines are decoded as if by read(), although the stream position is not changed.
Example usage:
import io

output = io.StringIO()
output.write(u'First line.\n')
output.write(u'Second line.\n')
# Retrieve file contents -- this will be
# u'First line.\nSecond line.\n'
contents = output.getvalue()
# Close object and discard memory buffer --
# .getvalue() will now raise an exception.
output.close()
class io.IncrementalNewlineDecoder
A helper codec that decodes newlines for universal newlines mode. It inherits codecs.IncrementalDecoder.
15.2.6. Advanced topics
Here we will discuss several advanced topics pertaining to the concrete
I/O implementations described above.
15.2.6.1. Performance
15.2.6.1.1. Binary I/O
By reading and writing only large chunks of data even when the user asks
for a single byte, buffered I/O is designed to hide any inefficiency in
calling and executing the operating system's unbuffered I/O routines. The gain will vary very much depending on the OS and the kind of I/O which is
performed (for example, on some contemporary OSes such as Linux, unbuffered
disk I/O can be as fast as buffered I/O).
The bottom line, however, is
that buffered I/O will offer you predictable performance regardless of the
platform and the backing device.
Therefore, it is almost always preferable to
use buffered I/O rather than unbuffered I/O.
15.2.6.1.2. Text I/O
Text I/O over a binary storage (such as a file) is significantly slower than
binary I/O over the same storage, because it implies conversions from
unicode to binary data using a character codec.
This can become noticeable
if you handle huge amounts of text data (for example very large log files).
Also, TextIOWrapper.tell() and TextIOWrapper.seek() are both
quite slow due to the reconstruction algorithm used.
StringIO, however, is a native in-memory unicode container and will exhibit similar speed to BytesIO.
15.2.6.2. Multi-threading
FileIO objects are thread-safe to the extent that the operating system calls (such as read(2) under Unix) they are wrapping are thread-safe too.
Binary buffered objects (instances of BufferedReader, BufferedWriter, BufferedRandom and BufferedRWPair) protect their internal structures using a lock; it is therefore safe to call them from multiple threads at once.
TextIOWrapper objects are not thread-safe.
15.2.6.3. Reentrancy
Binary buffered objects (instances of BufferedReader, BufferedWriter, BufferedRandom and BufferedRWPair) are not reentrant. While reentrant calls will not happen in normal situations, they can arise if you are doing I/O in a signal handler. If it is attempted to enter a buffered object again while already being accessed from the same thread, then a RuntimeError is raised.
The above implicitly extends to text files, since the open() function will wrap a buffered object inside a TextIOWrapper. This includes standard streams and therefore affects the built-in print() function as well.
