What is this "def" I heard of?

"def" is a replacement for a type name. In variable definitions it is used to indicate that you don't care
about the type. In variable definitions it is mandatory to either provide a type name explicitly or to use
"def" in replacement. This is needed to the make variable definitions detectable for the Groovy parser.
These definitions may occur for local variables in a script or for local variables and properties/fields in a
class.
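For the class case, a minimal sketch of what such definitions might look like (the class and property
names are made up for illustration, they are not from the FAQ):

  class Person {
      String email        // property with an explicit type
      def nickname        // property declared with "def": any value may be assigned to it
      private def cache   // a field can be declared with "def" in the same way
  }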
Rule of thumb
You can think of "def" as an alias of "Object" and you will understand it in an instant.
A future version of Groovy may give "def" an additional meaning in terms of static and dynamic typing,
but that is post Groovy 1.0.
"def" can also replace "void" as the return type in a method definition.

  def dynamic = 1
  dynamic = "I am a String stored in a variable of dynamic type"
  int typed = 2
  typed = "I am a String stored in a variable of type int??"   // throws ClassCastException

The assignment of a String to a variable of type int will fail, whereas a variable typed with "def" allows it.
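To illustrate the method case mentioned above, here is a small sketch (the method names are invented
for this example): a method declared with "def" may return a value of any type, and "def" can also stand
where "void" would otherwise be written.

  def greet(name) {                       // "def" in place of a return type
      return "Hello, " + name             // any type may be returned
  }

  void logTyped(msg) { println msg }      // declared with "void": returns nothing
  def logDynamic(msg) { println msg }     // "def" taking the place of "void"; the call returns null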

Quickstart Note The data files used in the quickstart guide are updated from time to time, which means that the adjusted close changes and with it the close (and the other components). That means that the actual output may be different to what was put in the documentation at the time of writing. Using the platform Let’s run through a series of examples (from almost an empty one to a fully fledged strategy) but not without before roughly explaining 2 basic concepts when working with backtrader Lines Data Feeds, Indicators and Strategies have lines. A line is a succession of points that when joined together form this line. When talking about the markets, a Data Feed has usually the following set of points per day: Open, High, Low, Close, Volume, OpenInterest The series of “Open”s along time is a Line. And therefore a Data Feed has usually 6 lines. If we also consider “DateTime” (which is the actual reference for a single point), we could count 7 lines. Index 0 Approach When accessing the values in a line, the current value is accessed with index: 0 And the “last” output value is accessed with -1. This in line with Python conventions for iterables (and a line can be iterated and is therefore an iterable) where index -1 is used to access the “last” item of the iterable/array. In our case is the last output value what’s getting accessed. As such and being index 0 right after -1, it is used to access the current moment in line. With that in mind and if we imagine a Strategy featuring a Simple Moving average created during initialization: self.sma = SimpleMovingAverage(.....) The easiest and simplest way to access the current value of this moving average: av = self.sma[0] There is no need to know how many bars/minutes/days/months have been processed, because “0” uniquely identifies the current instant. Following pythonic tradition, the “last” output value is accessed using -1: previous_value = self.sma[-1] Of course earlier output values can be accessed with -2, -3, … From 0 to 100: the samples Basic Setup Let’s get running. from __future__ import (absolute_import, division, print_function, unicode_literals) import backtrader as bt if __name__ == '__main__': cerebro = bt.Cerebro() print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue()) cerebro.run() print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue()) After the execution the output is: Starting Portfolio Value: 10000.00 Final Portfolio Value: 10000.00 In this example: backtrader was imported The Cerebro engine was instantiated The resulting cerebro instance was told to run (loop over data) And the resulting outcome was printed out Although it doesn’t seem much, let’s point out something explicitly shown: The Cerebro engine has created a broker instance in the background The instance already has some cash to start with This behind the scenes broker instantiation is a constant trait in the platform to simplify the life of the user. If no broker is set by the user, a default one is put in place. And 10K monetary units is a usual value with some brokers to begin with. Setting the Cash In the world of finance, for sure only “losers” start with 10k. Let’s change the cash and run the example again. 
from __future__ import (absolute_import, division, print_function, unicode_literals) import backtrader as bt if __name__ == '__main__': cerebro = bt.Cerebro() cerebro.broker.setcash(100000.0) print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue()) cerebro.run() print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue()) After the execution the output is: Starting Portfolio Value: 1000000.00 Final Portfolio Value: 1000000.00 Mission accomplished. Let’s move to tempestuous waters. Adding a Data Feed Having cash is fun, but the purpose behind all this is to let an automated strategy multiply the cash without moving a finger by operating on an asset which we see as a Data Feed Ergo … No Data Feed -> No Fun. Let’s add one to the ever growing example. from __future__ import (absolute_import, division, print_function, unicode_literals) import datetime # For datetime objects import os.path # To manage paths import sys # To find out the script name (in argv[0]) # Import the backtrader platform import backtrader as bt if __name__ == '__main__': # Create a cerebro entity cerebro = bt.Cerebro() # Datas are in a subfolder of the samples. Need to find where the script is # because it could have been called from anywhere modpath = os.path.dirname(os.path.abspath(sys.argv[0])) datapath = os.path.join(modpath, '../../datas/orcl-1995-2014.txt') # Create a Data Feed data = bt.feeds.YahooFinanceCSVData( dataname=datapath, # Do not pass values before this date fromdate=datetime.datetime(2000, 1, 1), # Do not pass values after this date todate=datetime.datetime(2000, 12, 31), reverse=False) # Add the Data Feed to Cerebro cerebro.adddata(data) # Set our desired cash start cerebro.broker.setcash(100000.0) # Print out the starting conditions print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue()) # Run over everything cerebro.run() # Print out the final result print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue()) After the execution the output is: Starting Portfolio Value: 1000000.00 Final Portfolio Value: 1000000.00 The amount of boilerplate has grown slightly, because we added: Finding out where our example script is to be able to locate the sample Data Feed file Having datetime objects to filter on which data from the Data Feed we will be operating Aside from that, the Data Feed is created and added to cerebro. The output has not changed and it would be a miracle if it had. Note Yahoo Online sends the CSV data in date descending order, which is not the standard convention. The reversed=True prameter takes into account that the CSV data in the file has already been reversed and has the standard expected date ascending order. Our First Strategy The cash is in the broker and the Data Feed is there. It seems like risky business is just around the corner. Let’s put a Strategy into the equation and print the “Close” price of each day (bar). DataSeries (the underlying class in Data Feeds) objects have aliases to access the well known OHLC (Open High Low Close) daily values. This should ease up the creation of our printing logic. 
from __future__ import (absolute_import, division, print_function, unicode_literals) import datetime # For datetime objects import os.path # To manage paths import sys # To find out the script name (in argv[0]) # Import the backtrader platform import backtrader as bt # Create a Stratey class TestStrategy(bt.Strategy): def log(self, txt, dt=None): ''' Logging function for this strategy''' dt = dt or self.datas[0].datetime.date(0) print('%s, %s' % (dt.isoformat(), txt)) def __init__(self): # Keep a reference to the "close" line in the data[0] dataseries self.dataclose = self.datas[0].close def next(self): # Simply log the closing price of the series from the reference self.log('Close, %.2f' % self.dataclose[0]) if __name__ == '__main__': # Create a cerebro entity cerebro = bt.Cerebro() # Add a strategy cerebro.addstrategy(TestStrategy) # Datas are in a subfolder of the samples. Need to find where the script is # because it could have been called from anywhere modpath = os.path.dirname(os.path.abspath(sys.argv[0])) datapath = os.path.join(modpath, '../../datas/orcl-1995-2014.txt') # Create a Data Feed data = bt.feeds.YahooFinanceCSVData( dataname=datapath, # Do not pass values before this date fromdate=datetime.datetime(2000, 1, 1), # Do not pass values before this date todate=datetime.datetime(2000, 12, 31), # Do not pass values after this date reverse=False) # Add the Data Feed to Cerebro cerebro.adddata(data) # Set our desired cash start cerebro.broker.setcash(100000.0) # Print out the starting conditions print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue()) # Run over everything cerebro.run() # Print out the final result print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue()) After the execution the output is: Starting Portfolio Value: 100000.00 2000-01-03T00:00:00, Close, 27.85 2000-01-04T00:00:00, Close, 25.39 2000-01-05T00:00:00, Close, 24.05 ... ... ... 2000-12-26T00:00:00, Close, 29.17 2000-12-27T00:00:00, Close, 28.94 2000-12-28T00:00:00, Close, 29.29 2000-12-29T00:00:00, Close, 27.41 Final Portfolio Value: 100000.00 Someone said the stockmarket was risky business, but it doesn’t seem so. Let’s explain some of the magic: Upon init being called the strategy already has a list of datas that are present in the platform This is a standard Python list and datas can be accessed in the order they were inserted. The first data in the list self.datas[0] is the default data for trading operations and to keep all strategy elements synchronized (it’s the system clock) self.dataclose = self.datas[0].close keeps a reference to the close line. Only one level of indirection is later needed to access the close values. The strategy next method will be called on each bar of the system clock (self.datas[0]). This is true until other things come into play like indicators, which need some bars to start producing an output. More on that later. Adding some Logic to the Strategy Let’s try some crazy idea we had by looking at some charts If the price has been falling 3 sessions in a row … BUY BUY BUY!!! 
from __future__ import (absolute_import, division, print_function, unicode_literals) import datetime # For datetime objects import os.path # To manage paths import sys # To find out the script name (in argv[0]) # Import the backtrader platform import backtrader as bt # Create a Stratey class TestStrategy(bt.Strategy): def log(self, txt, dt=None): ''' Logging function fot this strategy''' dt = dt or self.datas[0].datetime.date(0) print('%s, %s' % (dt.isoformat(), txt)) def __init__(self): # Keep a reference to the "close" line in the data[0] dataseries self.dataclose = self.datas[0].close def next(self): # Simply log the closing price of the series from the reference self.log('Close, %.2f' % self.dataclose[0]) if self.dataclose[0] < self.dataclose[-1]: # current close less than previous close if self.dataclose[-1] < self.dataclose[-2]: # previous close less than the previous close # BUY, BUY, BUY!!! (with all possible default parameters) self.log('BUY CREATE, %.2f' % self.dataclose[0]) self.buy() if __name__ == '__main__': # Create a cerebro entity cerebro = bt.Cerebro() # Add a strategy cerebro.addstrategy(TestStrategy) # Datas are in a subfolder of the samples. Need to find where the script is # because it could have been called from anywhere modpath = os.path.dirname(os.path.abspath(sys.argv[0])) datapath = os.path.join(modpath, '../../datas/orcl-1995-2014.txt') # Create a Data Feed data = bt.feeds.YahooFinanceCSVData( dataname=datapath, # Do not pass values before this date fromdate=datetime.datetime(2000, 1, 1), # Do not pass values before this date todate=datetime.datetime(2000, 12, 31), # Do not pass values after this date reverse=False) # Add the Data Feed to Cerebro cerebro.adddata(data) # Set our desired cash start cerebro.broker.setcash(100000.0) # Print out the starting conditions print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue()) # Run over everything cerebro.run() # Print out the final result print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue()) After the execution the output is: Starting Portfolio Value: 100000.00 2000-01-03, Close, 27.85 2000-01-04, Close, 25.39 2000-01-05, Close, 24.05 2000-01-05, BUY CREATE, 24.05 2000-01-06, Close, 22.63 2000-01-06, BUY CREATE, 22.63 2000-01-07, Close, 24.37 ... ... ... 2000-12-20, BUY CREATE, 26.88 2000-12-21, Close, 27.82 2000-12-22, Close, 30.06 2000-12-26, Close, 29.17 2000-12-27, Close, 28.94 2000-12-27, BUY CREATE, 28.94 2000-12-28, Close, 29.29 2000-12-29, Close, 27.41 Final Portfolio Value: 99725.08 Several “BUY” creation orders were issued, our porftolio value was decremented. A couple of important things are clearly missing. The order was created but it is unknown if it was executed, when and at what price. The next example will build upon that by listening to notifications of order status. The curious reader may ask how many shares are being bought, what asset is being bought and how are orders being executed. Where possible (and in this case it is) the platform fills in the gaps: self.datas[0] (the main data aka system clock) is the target asset if no other one is specified The stake is provided behind the scenes by a position sizer which uses a fixed stake, being the default “1”. It will be modified in a later example The order is executed “At Market”. The broker (shown in previous examples) executes this using the opening price of the next bar, because that’s the 1st tick after the current under examination bar. 
The order is executed so far without any commission (more on that later) Do not only buy … but SELL After knowing how to enter the market (long), an “exit concept” is needed and also understanding whether the strategy is in the market. Luckily a Strategy object offers access to a position attribute for the default data feed Methods buy and sell return the created (not yet executed) order Changes in orders’ status will be notified to the strategy via a notify method The “exit concept” will be an easy one: Exit after 5 bars (on the 6th bar) have elapsed for good or for worse Please notice that there is no “time” or “timeframe” implied: number of bars. The bars can represent 1 minute, 1 hour, 1 day, 1 week or any other time period. Although we know the data source is a daily one, the strategy makes no assumption about that. Additionally and to simplify: Do only allow a Buy order if not yet in the market Note The next method gets no “bar index” passed and therefore it seems obscure how to understand when 5 bars may have elapsed, but this has been modeled in pythonic way: call len on an object and it will tell you the length of its lines. Just write down (save in a variable) at which length in an operation took place and see if the current length is 5 bars away. from __future__ import (absolute_import, division, print_function, unicode_literals) import datetime # For datetime objects import os.path # To manage paths import sys # To find out the script name (in argv[0]) # Import the backtrader platform import backtrader as bt # Create a Stratey class TestStrategy(bt.Strategy): def log(self, txt, dt=None): ''' Logging function fot this strategy''' dt = dt or self.datas[0].datetime.date(0) print('%s, %s' % (dt.isoformat(), txt)) def __init__(self): # Keep a reference to the "close" line in the data[0] dataseries self.dataclose = self.datas[0].close # To keep track of pending orders self.order = None def notify_order(self, order): if order.status in [order.Submitted, order.Accepted]: # Buy/Sell order submitted/accepted to/by broker - Nothing to do return # Check if an order has been completed # Attention: broker could reject order if not enough cash if order.status in [order.Completed]: if order.isbuy(): self.log('BUY EXECUTED, %.2f' % order.executed.price) elif order.issell(): self.log('SELL EXECUTED, %.2f' % order.executed.price) self.bar_executed = len(self) elif order.status in [order.Canceled, order.Margin, order.Rejected]: self.log('Order Canceled/Margin/Rejected') # Write down: no pending order self.order = None def next(self): # Simply log the closing price of the series from the reference self.log('Close, %.2f' % self.dataclose[0]) # Check if an order is pending ... if yes, we cannot send a 2nd one if self.order: return # Check if we are in the market if not self.position: # Not yet ... we MIGHT BUY if ... if self.dataclose[0] < self.dataclose[-1]: # current close less than previous close if self.dataclose[-1] < self.dataclose[-2]: # previous close less than the previous close # BUY, BUY, BUY!!! (with default parameters) self.log('BUY CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.buy() else: # Already in the market ... we might sell if len(self) >= (self.bar_executed + 5): # SELL, SELL, SELL!!! 
(with all possible default parameters) self.log('SELL CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.sell() if __name__ == '__main__': # Create a cerebro entity cerebro = bt.Cerebro() # Add a strategy cerebro.addstrategy(TestStrategy) # Datas are in a subfolder of the samples. Need to find where the script is # because it could have been called from anywhere modpath = os.path.dirname(os.path.abspath(sys.argv[0])) datapath = os.path.join(modpath, '../../datas/orcl-1995-2014.txt') # Create a Data Feed data = bt.feeds.YahooFinanceCSVData( dataname=datapath, # Do not pass values before this date fromdate=datetime.datetime(2000, 1, 1), # Do not pass values before this date todate=datetime.datetime(2000, 12, 31), # Do not pass values after this date reverse=False) # Add the Data Feed to Cerebro cerebro.adddata(data) # Set our desired cash start cerebro.broker.setcash(100000.0) # Print out the starting conditions print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue()) # Run over everything cerebro.run() # Print out the final result print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue()) After the execution the output is: Starting Portfolio Value: 100000.00 2000-01-03T00:00:00, Close, 27.85 2000-01-04T00:00:00, Close, 25.39 2000-01-05T00:00:00, Close, 24.05 2000-01-05T00:00:00, BUY CREATE, 24.05 2000-01-06T00:00:00, BUY EXECUTED, 23.61 2000-01-06T00:00:00, Close, 22.63 2000-01-07T00:00:00, Close, 24.37 2000-01-10T00:00:00, Close, 27.29 2000-01-11T00:00:00, Close, 26.49 2000-01-12T00:00:00, Close, 24.90 2000-01-13T00:00:00, Close, 24.77 2000-01-13T00:00:00, SELL CREATE, 24.77 2000-01-14T00:00:00, SELL EXECUTED, 25.70 2000-01-14T00:00:00, Close, 25.18 ... ... ... 2000-12-15T00:00:00, SELL CREATE, 26.93 2000-12-18T00:00:00, SELL EXECUTED, 28.29 2000-12-18T00:00:00, Close, 30.18 2000-12-19T00:00:00, Close, 28.88 2000-12-20T00:00:00, Close, 26.88 2000-12-20T00:00:00, BUY CREATE, 26.88 2000-12-21T00:00:00, BUY EXECUTED, 26.23 2000-12-21T00:00:00, Close, 27.82 2000-12-22T00:00:00, Close, 30.06 2000-12-26T00:00:00, Close, 29.17 2000-12-27T00:00:00, Close, 28.94 2000-12-28T00:00:00, Close, 29.29 2000-12-29T00:00:00, Close, 27.41 2000-12-29T00:00:00, SELL CREATE, 27.41 Final Portfolio Value: 100018.53 Blistering Barnacles!!! The system made money … something must be wrong The broker says: Show me the money! And the money is called “commission”. Let’s add a reasonable 0.1% commision rate per operation (both for buying and selling … yes the broker is avid …) A single line will suffice for it: # 0.1% ... divide by 100 to remove the % cerebro.broker.setcommission(commission=0.001) Being experienced with the platform we want to see the profit or loss after a buy/sell cycle, with and without commission. 
from __future__ import (absolute_import, division, print_function, unicode_literals) import datetime # For datetime objects import os.path # To manage paths import sys # To find out the script name (in argv[0]) # Import the backtrader platform import backtrader as bt # Create a Stratey class TestStrategy(bt.Strategy): def log(self, txt, dt=None): ''' Logging function fot this strategy''' dt = dt or self.datas[0].datetime.date(0) print('%s, %s' % (dt.isoformat(), txt)) def __init__(self): # Keep a reference to the "close" line in the data[0] dataseries self.dataclose = self.datas[0].close # To keep track of pending orders and buy price/commission self.order = None self.buyprice = None self.buycomm = None def notify_order(self, order): if order.status in [order.Submitted, order.Accepted]: # Buy/Sell order submitted/accepted to/by broker - Nothing to do return # Check if an order has been completed # Attention: broker could reject order if not enough cash if order.status in [order.Completed]: if order.isbuy(): self.log( 'BUY EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' % (order.executed.price, order.executed.value, order.executed.comm)) self.buyprice = order.executed.price self.buycomm = order.executed.comm else: # Sell self.log('SELL EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' % (order.executed.price, order.executed.value, order.executed.comm)) self.bar_executed = len(self) elif order.status in [order.Canceled, order.Margin, order.Rejected]: self.log('Order Canceled/Margin/Rejected') self.order = None def notify_trade(self, trade): if not trade.isclosed: return self.log('OPERATION PROFIT, GROSS %.2f, NET %.2f' % (trade.pnl, trade.pnlcomm)) def next(self): # Simply log the closing price of the series from the reference self.log('Close, %.2f' % self.dataclose[0]) # Check if an order is pending ... if yes, we cannot send a 2nd one if self.order: return # Check if we are in the market if not self.position: # Not yet ... we MIGHT BUY if ... if self.dataclose[0] < self.dataclose[-1]: # current close less than previous close if self.dataclose[-1] < self.dataclose[-2]: # previous close less than the previous close # BUY, BUY, BUY!!! (with default parameters) self.log('BUY CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.buy() else: # Already in the market ... we might sell if len(self) >= (self.bar_executed + 5): # SELL, SELL, SELL!!! (with all possible default parameters) self.log('SELL CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.sell() if __name__ == '__main__': # Create a cerebro entity cerebro = bt.Cerebro() # Add a strategy cerebro.addstrategy(TestStrategy) # Datas are in a subfolder of the samples. Need to find where the script is # because it could have been called from anywhere modpath = os.path.dirname(os.path.abspath(sys.argv[0])) datapath = os.path.join(modpath, '../../datas/orcl-1995-2014.txt') # Create a Data Feed data = bt.feeds.YahooFinanceCSVData( dataname=datapath, # Do not pass values before this date fromdate=datetime.datetime(2000, 1, 1), # Do not pass values before this date todate=datetime.datetime(2000, 12, 31), # Do not pass values after this date reverse=False) # Add the Data Feed to Cerebro cerebro.adddata(data) # Set our desired cash start cerebro.broker.setcash(100000.0) # Set the commission - 0.1% ... 
divide by 100 to remove the % cerebro.broker.setcommission(commission=0.001) # Print out the starting conditions print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue()) # Run over everything cerebro.run() # Print out the final result print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue()) After the execution the output is: Starting Portfolio Value: 100000.00 2000-01-03T00:00:00, Close, 27.85 2000-01-04T00:00:00, Close, 25.39 2000-01-05T00:00:00, Close, 24.05 2000-01-05T00:00:00, BUY CREATE, 24.05 2000-01-06T00:00:00, BUY EXECUTED, Price: 23.61, Cost: 23.61, Commission 0.02 2000-01-06T00:00:00, Close, 22.63 2000-01-07T00:00:00, Close, 24.37 2000-01-10T00:00:00, Close, 27.29 2000-01-11T00:00:00, Close, 26.49 2000-01-12T00:00:00, Close, 24.90 2000-01-13T00:00:00, Close, 24.77 2000-01-13T00:00:00, SELL CREATE, 24.77 2000-01-14T00:00:00, SELL EXECUTED, Price: 25.70, Cost: 25.70, Commission 0.03 2000-01-14T00:00:00, OPERATION PROFIT, GROSS 2.09, NET 2.04 2000-01-14T00:00:00, Close, 25.18 ... ... ... 2000-12-15T00:00:00, SELL CREATE, 26.93 2000-12-18T00:00:00, SELL EXECUTED, Price: 28.29, Cost: 28.29, Commission 0.03 2000-12-18T00:00:00, OPERATION PROFIT, GROSS -0.06, NET -0.12 2000-12-18T00:00:00, Close, 30.18 2000-12-19T00:00:00, Close, 28.88 2000-12-20T00:00:00, Close, 26.88 2000-12-20T00:00:00, BUY CREATE, 26.88 2000-12-21T00:00:00, BUY EXECUTED, Price: 26.23, Cost: 26.23, Commission 0.03 2000-12-21T00:00:00, Close, 27.82 2000-12-22T00:00:00, Close, 30.06 2000-12-26T00:00:00, Close, 29.17 2000-12-27T00:00:00, Close, 28.94 2000-12-28T00:00:00, Close, 29.29 2000-12-29T00:00:00, Close, 27.41 2000-12-29T00:00:00, SELL CREATE, 27.41 Final Portfolio Value: 100016.98 God Save the Queen!!! The system still made money. Before moving on, let’s notice something by filtering the “OPERATION PROFIT” lines: 2000-01-14T00:00:00, OPERATION PROFIT, GROSS 2.09, NET 2.04 2000-02-07T00:00:00, OPERATION PROFIT, GROSS 3.68, NET 3.63 2000-02-28T00:00:00, OPERATION PROFIT, GROSS 4.48, NET 4.42 2000-03-13T00:00:00, OPERATION PROFIT, GROSS 3.48, NET 3.41 2000-03-22T00:00:00, OPERATION PROFIT, GROSS -0.41, NET -0.49 2000-04-07T00:00:00, OPERATION PROFIT, GROSS 2.45, NET 2.37 2000-04-20T00:00:00, OPERATION PROFIT, GROSS -1.95, NET -2.02 2000-05-02T00:00:00, OPERATION PROFIT, GROSS 5.46, NET 5.39 2000-05-11T00:00:00, OPERATION PROFIT, GROSS -3.74, NET -3.81 2000-05-30T00:00:00, OPERATION PROFIT, GROSS -1.46, NET -1.53 2000-07-05T00:00:00, OPERATION PROFIT, GROSS -1.62, NET -1.69 2000-07-14T00:00:00, OPERATION PROFIT, GROSS 2.08, NET 2.01 2000-07-28T00:00:00, OPERATION PROFIT, GROSS 0.14, NET 0.07 2000-08-08T00:00:00, OPERATION PROFIT, GROSS 4.36, NET 4.29 2000-08-21T00:00:00, OPERATION PROFIT, GROSS 1.03, NET 0.95 2000-09-15T00:00:00, OPERATION PROFIT, GROSS -4.26, NET -4.34 2000-09-27T00:00:00, OPERATION PROFIT, GROSS 1.29, NET 1.22 2000-10-13T00:00:00, OPERATION PROFIT, GROSS -2.98, NET -3.04 2000-10-26T00:00:00, OPERATION PROFIT, GROSS 3.01, NET 2.95 2000-11-06T00:00:00, OPERATION PROFIT, GROSS -3.59, NET -3.65 2000-11-16T00:00:00, OPERATION PROFIT, GROSS 1.28, NET 1.23 2000-12-01T00:00:00, OPERATION PROFIT, GROSS 2.59, NET 2.54 2000-12-18T00:00:00, OPERATION PROFIT, GROSS -0.06, NET -0.12 Adding up the “NET” profits the final figure is: 15.83 But the system said the following at the end: 2000-12-29T00:00:00, SELL CREATE, 27.41 Final Portfolio Value: 100016.98 And obviously 15.83 is not 16.98. There is no error whatsoever. The “NET” profit of 15.83 is already cash in the bag. 
Unfortunately (or fortunately, the better to understand the platform) there is an open position on the last day of the Data Feed. Even if a SELL operation has been sent ... IT HAS NOT YET BEEN EXECUTED. The "Final Portfolio Value" calculated by the broker takes into account the "Close" price on 2000-12-29. The actual execution price would have been set on the next trading day, which happened to be 2001-01-02. Extending the Data Feed to take this day into account, the output is:

2001-01-02T00:00:00, SELL EXECUTED, Price: 27.87, Cost: 27.87, Commission 0.03
2001-01-02T00:00:00, OPERATION PROFIT, GROSS 1.64, NET 1.59
2001-01-02T00:00:00, Close, 24.87
2001-01-02T00:00:00, BUY CREATE, 24.87
Final Portfolio Value: 100017.41

Now adding the previous NET profit to the completed operation's net profit: 15.83 + 1.59 = 17.42. Which (discarding rounding errors in the "print" statements) is the extra Portfolio value above the initial 100000 monetary units the strategy started with.

Customizing the Strategy: Parameters

It would be a bit impractical to hardcode some of the values in the strategy with no chance of changing them easily. Parameters come in handy to help. Definition of parameters is easy and looks like:

params = (('myparam', 27), ('exitbars', 5),)

Since this is a standard Python tuple with some tuples inside it, the following may look more appealing to some:

params = (
    ('myparam', 27),
    ('exitbars', 5),
)

With either formatting, parametrization of the strategy is allowed when adding the strategy to the Cerebro engine:

# Add a strategy
cerebro.addstrategy(TestStrategy, myparam=20, exitbars=7)

Note
The setsizing method below is deprecated. This content is kept here for anyone looking at old samples of the sources. The sources have been updated to use:

cerebro.addsizer(bt.sizers.FixedSize, stake=10)

Please read the section about sizers.

Using the parameters in the strategy is easy, as they are stored in a "params" attribute. If, for example, we want to fix the stake, we can pass the stake parameter to the position sizer like this during __init__:

# Set the sizer stake from the params
self.sizer.setsizing(self.params.stake)

We could have also called buy and sell with a stake parameter and self.params.stake as the value. The logic to exit gets modified:

# Already in the market ...
we might sell if len(self) >= (self.bar_executed + self.params.exitbars): With all this in mind the example evolves to look like: from __future__ import (absolute_import, division, print_function, unicode_literals) import datetime # For datetime objects import os.path # To manage paths import sys # To find out the script name (in argv[0]) # Import the backtrader platform import backtrader as bt # Create a Stratey class TestStrategy(bt.Strategy): params = ( ('exitbars', 5), ) def log(self, txt, dt=None): ''' Logging function fot this strategy''' dt = dt or self.datas[0].datetime.date(0) print('%s, %s' % (dt.isoformat(), txt)) def __init__(self): # Keep a reference to the "close" line in the data[0] dataseries self.dataclose = self.datas[0].close # To keep track of pending orders and buy price/commission self.order = None self.buyprice = None self.buycomm = None def notify_order(self, order): if order.status in [order.Submitted, order.Accepted]: # Buy/Sell order submitted/accepted to/by broker - Nothing to do return # Check if an order has been completed # Attention: broker could reject order if not enough cash if order.status in [order.Completed]: if order.isbuy(): self.log( 'BUY EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' % (order.executed.price, order.executed.value, order.executed.comm)) self.buyprice = order.executed.price self.buycomm = order.executed.comm else: # Sell self.log('SELL EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' % (order.executed.price, order.executed.value, order.executed.comm)) self.bar_executed = len(self) elif order.status in [order.Canceled, order.Margin, order.Rejected]: self.log('Order Canceled/Margin/Rejected') self.order = None def notify_trade(self, trade): if not trade.isclosed: return self.log('OPERATION PROFIT, GROSS %.2f, NET %.2f' % (trade.pnl, trade.pnlcomm)) def next(self): # Simply log the closing price of the series from the reference self.log('Close, %.2f' % self.dataclose[0]) # Check if an order is pending ... if yes, we cannot send a 2nd one if self.order: return # Check if we are in the market if not self.position: # Not yet ... we MIGHT BUY if ... if self.dataclose[0] < self.dataclose[-1]: # current close less than previous close if self.dataclose[-1] < self.dataclose[-2]: # previous close less than the previous close # BUY, BUY, BUY!!! (with default parameters) self.log('BUY CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.buy() else: # Already in the market ... we might sell if len(self) >= (self.bar_executed + self.params.exitbars): # SELL, SELL, SELL!!! (with all possible default parameters) self.log('SELL CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.sell() if __name__ == '__main__': # Create a cerebro entity cerebro = bt.Cerebro() # Add a strategy cerebro.addstrategy(TestStrategy) # Datas are in a subfolder of the samples. 
Need to find where the script is
    # because it could have been called from anywhere
    modpath = os.path.dirname(os.path.abspath(sys.argv[0]))
    datapath = os.path.join(modpath, '../../datas/orcl-1995-2014.txt')

    # Create a Data Feed
    data = bt.feeds.YahooFinanceCSVData(
        dataname=datapath,
        # Do not pass values before this date
        fromdate=datetime.datetime(2000, 1, 1),
        # Do not pass values after this date
        todate=datetime.datetime(2000, 12, 31),
        reverse=False)

    # Add the Data Feed to Cerebro
    cerebro.adddata(data)

    # Set our desired cash start
    cerebro.broker.setcash(100000.0)

    # Add a FixedSize sizer according to the stake
    cerebro.addsizer(bt.sizers.FixedSize, stake=10)

    # Set the commission - 0.1% ... divide by 100 to remove the %
    cerebro.broker.setcommission(commission=0.001)

    # Print out the starting conditions
    print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())

    # Run over everything
    cerebro.run()

    # Print out the final result
    print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())

After the execution the output is:

Starting Portfolio Value: 100000.00
2000-01-03T00:00:00, Close, 27.85
2000-01-04T00:00:00, Close, 25.39
2000-01-05T00:00:00, Close, 24.05
2000-01-05T00:00:00, BUY CREATE, 24.05
2000-01-06T00:00:00, BUY EXECUTED, Size 10, Price: 23.61, Cost: 236.10, Commission 0.24
2000-01-06T00:00:00, Close, 22.63
...
...
...
2000-12-20T00:00:00, BUY CREATE, 26.88
2000-12-21T00:00:00, BUY EXECUTED, Size 10, Price: 26.23, Cost: 262.30, Commission 0.26
2000-12-21T00:00:00, Close, 27.82
2000-12-22T00:00:00, Close, 30.06
2000-12-26T00:00:00, Close, 29.17
2000-12-27T00:00:00, Close, 28.94
2000-12-28T00:00:00, Close, 29.29
2000-12-29T00:00:00, Close, 27.41
2000-12-29T00:00:00, SELL CREATE, 27.41
Final Portfolio Value: 100169.80

In order to see the difference, the print outputs have also been extended to show the execution size. Having multiplied the stake by 10, the obvious has happened: the profit and loss has been multiplied by 10. Instead of 16.98, the surplus is now 169.80.

Adding an indicator

Having heard of indicators, the next thing anyone would add to the strategy is one of them. For sure they must be much better than a simple "3 lower closes" strategy. Inspired by one of the examples from PyAlgoTrade, this is a strategy using a Simple Moving Average:

Buy "AtMarket" if the close is greater than the Average
If in the market, sell if the close is smaller than the Average
Only 1 active operation is allowed in the market

Most of the existing code can be kept in place. Let's add the average during __init__ and keep a reference to it:

self.sma = bt.indicators.MovingAverageSimple(self.datas[0], period=self.params.maperiod)

And of course the logic to enter and exit the market will rely on the Average values. Look in the code for the logic (a rough sketch follows below).
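Before the full listing, here is a minimal sketch of how those rules can be expressed once the indicator reference is kept in __init__. The class name is hypothetical and the notify_order handling is reduced to the minimum; the complete listing further down is the reference version.

import backtrader as bt

class SmaSignalSketch(bt.Strategy):      # hypothetical name, illustration only
    params = (('maperiod', 15),)

    def __init__(self):
        self.dataclose = self.datas[0].close
        self.order = None                 # track a pending order
        self.sma = bt.indicators.SimpleMovingAverage(
            self.datas[0], period=self.params.maperiod)

    def notify_order(self, order):
        if order.status in [order.Completed, order.Canceled,
                            order.Margin, order.Rejected]:
            self.order = None             # no longer pending

    def next(self):
        if self.order:                    # an order is pending: do nothing
            return
        if not self.position:             # not in the market
            if self.dataclose[0] > self.sma[0]:
                self.order = self.buy()   # close above the average -> buy
        elif self.dataclose[0] < self.sma[0]:
            self.order = self.sell()      # close below the average -> sell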
Note The starting cash will be 1000 monetary units to be in line with the PyAlgoTrade example and no commission will be applied from __future__ import (absolute_import, division, print_function, unicode_literals) import datetime # For datetime objects import os.path # To manage paths import sys # To find out the script name (in argv[0]) # Import the backtrader platform import backtrader as bt # Create a Stratey class TestStrategy(bt.Strategy): params = ( ('maperiod', 15), ) def log(self, txt, dt=None): ''' Logging function fot this strategy''' dt = dt or self.datas[0].datetime.date(0) print('%s, %s' % (dt.isoformat(), txt)) def __init__(self): # Keep a reference to the "close" line in the data[0] dataseries self.dataclose = self.datas[0].close # To keep track of pending orders and buy price/commission self.order = None self.buyprice = None self.buycomm = None # Add a MovingAverageSimple indicator self.sma = bt.indicators.SimpleMovingAverage( self.datas[0], period=self.params.maperiod) def notify_order(self, order): if order.status in [order.Submitted, order.Accepted]: # Buy/Sell order submitted/accepted to/by broker - Nothing to do return # Check if an order has been completed # Attention: broker could reject order if not enough cash if order.status in [order.Completed]: if order.isbuy(): self.log( 'BUY EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' % (order.executed.price, order.executed.value, order.executed.comm)) self.buyprice = order.executed.price self.buycomm = order.executed.comm else: # Sell self.log('SELL EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' % (order.executed.price, order.executed.value, order.executed.comm)) self.bar_executed = len(self) elif order.status in [order.Canceled, order.Margin, order.Rejected]: self.log('Order Canceled/Margin/Rejected') self.order = None def notify_trade(self, trade): if not trade.isclosed: return self.log('OPERATION PROFIT, GROSS %.2f, NET %.2f' % (trade.pnl, trade.pnlcomm)) def next(self): # Simply log the closing price of the series from the reference self.log('Close, %.2f' % self.dataclose[0]) # Check if an order is pending ... if yes, we cannot send a 2nd one if self.order: return # Check if we are in the market if not self.position: # Not yet ... we MIGHT BUY if ... if self.dataclose[0] > self.sma[0]: # BUY, BUY, BUY!!! (with all possible default parameters) self.log('BUY CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.buy() else: if self.dataclose[0] < self.sma[0]: # SELL, SELL, SELL!!! (with all possible default parameters) self.log('SELL CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.sell() if __name__ == '__main__': # Create a cerebro entity cerebro = bt.Cerebro() # Add a strategy cerebro.addstrategy(TestStrategy) # Datas are in a subfolder of the samples. 
Need to find where the script is
    # because it could have been called from anywhere
    modpath = os.path.dirname(os.path.abspath(sys.argv[0]))
    datapath = os.path.join(modpath, '../../datas/orcl-1995-2014.txt')

    # Create a Data Feed
    data = bt.feeds.YahooFinanceCSVData(
        dataname=datapath,
        # Do not pass values before this date
        fromdate=datetime.datetime(2000, 1, 1),
        # Do not pass values after this date
        todate=datetime.datetime(2000, 12, 31),
        reverse=False)

    # Add the Data Feed to Cerebro
    cerebro.adddata(data)

    # Set our desired cash start
    cerebro.broker.setcash(1000.0)

    # Add a FixedSize sizer according to the stake
    cerebro.addsizer(bt.sizers.FixedSize, stake=10)

    # Set the commission
    cerebro.broker.setcommission(commission=0.0)

    # Print out the starting conditions
    print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())

    # Run over everything
    cerebro.run()

    # Print out the final result
    print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())

Now, before skipping to the next section, LOOK CAREFULLY at the first date which is shown in the log:

It's no longer 2000-01-03, the first trading day in the year 2K.
It's 2000-01-24 ... Who has stolen my cheese?

The missing days are not missing. The platform has adapted to the new circumstances:

An indicator (SimpleMovingAverage) has been added to the Strategy.
This indicator needs X bars to produce an output: in the example, 15.
2000-01-24 is the day on which the 15th bar occurs.

The backtrader platform assumes that the Strategy has the indicator in place for a good reason: to use it in the decision making process. And it makes no sense to try to make decisions if the indicator is not yet ready and producing values.

next will be first called when all indicators have already reached the minimum needed period to produce a value.
In the example there is a single indicator, but the strategy could have any number of them.

After the execution the output is:

Starting Portfolio Value: 1000.00
2000-01-24T00:00:00, Close, 25.55
2000-01-25T00:00:00, Close, 26.61
2000-01-25T00:00:00, BUY CREATE, 26.61
2000-01-26T00:00:00, BUY EXECUTED, Size 10, Price: 26.76, Cost: 267.60, Commission 0.00
2000-01-26T00:00:00, Close, 25.96
2000-01-27T00:00:00, Close, 24.43
2000-01-27T00:00:00, SELL CREATE, 24.43
2000-01-28T00:00:00, SELL EXECUTED, Size 10, Price: 24.28, Cost: 242.80, Commission 0.00
2000-01-28T00:00:00, OPERATION PROFIT, GROSS -24.80, NET -24.80
2000-01-28T00:00:00, Close, 22.34
2000-01-31T00:00:00, Close, 23.55
2000-02-01T00:00:00, Close, 25.46
2000-02-02T00:00:00, Close, 25.61
2000-02-02T00:00:00, BUY CREATE, 25.61
2000-02-03T00:00:00, BUY EXECUTED, Size 10, Price: 26.11, Cost: 261.10, Commission 0.00
...
...
...
2000-12-20T00:00:00, SELL CREATE, 26.88
2000-12-21T00:00:00, SELL EXECUTED, Size 10, Price: 26.23, Cost: 262.30, Commission 0.00
2000-12-21T00:00:00, OPERATION PROFIT, GROSS -20.60, NET -20.60
2000-12-21T00:00:00, Close, 27.82
2000-12-21T00:00:00, BUY CREATE, 27.82
2000-12-22T00:00:00, BUY EXECUTED, Size 10, Price: 28.65, Cost: 286.50, Commission 0.00
2000-12-22T00:00:00, Close, 30.06
2000-12-26T00:00:00, Close, 29.17
2000-12-27T00:00:00, Close, 28.94
2000-12-28T00:00:00, Close, 29.29
2000-12-29T00:00:00, Close, 27.41
2000-12-29T00:00:00, SELL CREATE, 27.41
Final Portfolio Value: 973.90

In the name of the King!!! A winning system turned into a losing one ... and that with no commission. It may well be that simply adding an indicator is not the universal panacea.
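To observe the warm-up behaviour directly, a small throwaway strategy can log when the platform switches from the warm-up phase to the normal phase. This is only an illustrative sketch with a hypothetical class name; it relies on the Strategy hooks prenext (called on bars before the minimum period is reached) and nextstart (called once on the first bar that satisfies it), as I understand them.

import backtrader as bt

class WarmupProbe(bt.Strategy):          # hypothetical helper, illustration only
    params = (('maperiod', 15),)

    def __init__(self):
        # The indicator imposes a minimum period of `maperiod` bars
        self.sma = bt.indicators.SimpleMovingAverage(
            self.datas[0], period=self.params.maperiod)

    def prenext(self):
        # Bars seen before the minimum period has been reached
        print('prenext: bar %d, %s' % (len(self), self.datas[0].datetime.date(0)))

    def nextstart(self):
        # First bar for which all indicators can produce a value
        print('nextstart: bar %d, %s' % (len(self), self.datas[0].datetime.date(0)))
        self.next()

    def next(self):
        pass  # normal trading logic would go here

Adding this strategy to a Cerebro instance fed with the same data should show prenext firing for the first 14 bars and nextstart on 2000-01-24, matching the log above.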
Note
The same logic and data with PyAlgoTrade yields a slightly different result (slightly off). Looking at the entire printout reveals that some operations are not exactly the same. The culprit is, once again, the usual suspect: rounding.
PyAlgoTrade does not round the data feed values when applying the divided "adjusted close" to the data feed values.
The Yahoo Data Feed provided by backtrader rounds the values down to 2 decimals after applying the adjusted close. Upon printing, the values seem the same, but it's obvious that sometimes that 5th decimal place plays a role.
Rounding down to 2 decimals seems more realistic, because Market Exchanges only allow a limited number of decimals per asset (usually 2 decimals for stocks).

Note
The Yahoo Data Feed (starting with version 1.8.11.99) allows specifying whether rounding has to happen and to how many decimals.

Visual Inspection: Plotting

A printout or log of the actual whereabouts of the system at each bar-instant is good, but humans tend to be visual, and therefore it seems right to offer a view of the same whereabouts as a chart.

Note
To plot you need to have matplotlib installed.

Once again, defaults for plotting are there to assist the platform user. Plotting is, incredibly, a 1 line operation:

cerebro.plot()

The location being, of course, after cerebro.run() has been called.

In order to display the automatic plotting capabilities and a couple of easy customizations, the following will be done:

A 2nd MovingAverage (Exponential) will be added. The defaults will plot it (just like the 1st) with the data.
A 3rd MovingAverage (Weighted) will be added. Customized to plot in its own subplot (even if not sensible).
A Stochastic (Slow) will be added. No change to the defaults.
A MACD will be added. No change to the defaults.
An RSI will be added. No change to the defaults.
A MovingAverage (Simple) will be applied to the RSI. No change to the defaults (it will be plotted with the RSI).
An AverageTrueRange will be added. Changed defaults to avoid it being plotted.

The entire set of additions to the __init__ method of the Strategy:

# Indicators for the plotting show
bt.indicators.ExponentialMovingAverage(self.datas[0], period=25)
bt.indicators.WeightedMovingAverage(self.datas[0], period=25).subplot = True
bt.indicators.StochasticSlow(self.datas[0])
bt.indicators.MACDHisto(self.datas[0])
rsi = bt.indicators.RSI(self.datas[0])
bt.indicators.SmoothedMovingAverage(rsi, period=10)
bt.indicators.ATR(self.datas[0]).plot = False

Note
Even if indicators are not explicitly added to a member variable of the strategy (like self.sma = MovingAverageSimple...), they will autoregister with the strategy, will influence the minimum period for next, and will be part of the plotting.
In the example only RSI is added to a temporary variable rsi, with the only intention of creating a MovingAverageSmoothed on it.
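The same plotting behaviour can also be adjusted after creating an indicator by touching its plotinfo attributes, instead of passing the keyword arguments used in the listing below. This is a hedged sketch only (the attribute names reflect the plotting defaults as I understand them, and the class name is hypothetical):

import backtrader as bt

class PlotTweaksSketch(bt.Strategy):     # hypothetical, illustration only
    def __init__(self):
        # Equivalent in intent to the subplot=True / plot=False keyword
        # arguments used in the listing further down.
        wma = bt.indicators.WeightedMovingAverage(self.datas[0], period=25)
        wma.plotinfo.subplot = True       # draw in its own panel, not on the data

        atr = bt.indicators.ATR(self.datas[0])
        atr.plotinfo.plot = False         # compute it, but keep it off the chart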
The example now: from __future__ import (absolute_import, division, print_function, unicode_literals) import datetime # For datetime objects import os.path # To manage paths import sys # To find out the script name (in argv[0]) # Import the backtrader platform import backtrader as bt # Create a Stratey class TestStrategy(bt.Strategy): params = ( ('maperiod', 15), ) def log(self, txt, dt=None): ''' Logging function fot this strategy''' dt = dt or self.datas[0].datetime.date(0) print('%s, %s' % (dt.isoformat(), txt)) def __init__(self): # Keep a reference to the "close" line in the data[0] dataseries self.dataclose = self.datas[0].close # To keep track of pending orders and buy price/commission self.order = None self.buyprice = None self.buycomm = None # Add a MovingAverageSimple indicator self.sma = bt.indicators.SimpleMovingAverage( self.datas[0], period=self.params.maperiod) # Indicators for the plotting show bt.indicators.ExponentialMovingAverage(self.datas[0], period=25) bt.indicators.WeightedMovingAverage(self.datas[0], period=25, subplot=True) bt.indicators.StochasticSlow(self.datas[0]) bt.indicators.MACDHisto(self.datas[0]) rsi = bt.indicators.RSI(self.datas[0]) bt.indicators.SmoothedMovingAverage(rsi, period=10) bt.indicators.ATR(self.datas[0], plot=False) def notify_order(self, order): if order.status in [order.Submitted, order.Accepted]: # Buy/Sell order submitted/accepted to/by broker - Nothing to do return # Check if an order has been completed # Attention: broker could reject order if not enough cash if order.status in [order.Completed]: if order.isbuy(): self.log( 'BUY EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' % (order.executed.price, order.executed.value, order.executed.comm)) self.buyprice = order.executed.price self.buycomm = order.executed.comm else: # Sell self.log('SELL EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' % (order.executed.price, order.executed.value, order.executed.comm)) self.bar_executed = len(self) elif order.status in [order.Canceled, order.Margin, order.Rejected]: self.log('Order Canceled/Margin/Rejected') # Write down: no pending order self.order = None def notify_trade(self, trade): if not trade.isclosed: return self.log('OPERATION PROFIT, GROSS %.2f, NET %.2f' % (trade.pnl, trade.pnlcomm)) def next(self): # Simply log the closing price of the series from the reference self.log('Close, %.2f' % self.dataclose[0]) # Check if an order is pending ... if yes, we cannot send a 2nd one if self.order: return # Check if we are in the market if not self.position: # Not yet ... we MIGHT BUY if ... if self.dataclose[0] > self.sma[0]: # BUY, BUY, BUY!!! (with all possible default parameters) self.log('BUY CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.buy() else: if self.dataclose[0] < self.sma[0]: # SELL, SELL, SELL!!! (with all possible default parameters) self.log('SELL CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.sell() if __name__ == '__main__': # Create a cerebro entity cerebro = bt.Cerebro() # Add a strategy cerebro.addstrategy(TestStrategy) # Datas are in a subfolder of the samples. 
Need to find where the script is
    # because it could have been called from anywhere
    modpath = os.path.dirname(os.path.abspath(sys.argv[0]))
    datapath = os.path.join(modpath, '../../datas/orcl-1995-2014.txt')

    # Create a Data Feed
    data = bt.feeds.YahooFinanceCSVData(
        dataname=datapath,
        # Do not pass values before this date
        fromdate=datetime.datetime(2000, 1, 1),
        # Do not pass values after this date
        todate=datetime.datetime(2000, 12, 31),
        reverse=False)

    # Add the Data Feed to Cerebro
    cerebro.adddata(data)

    # Set our desired cash start
    cerebro.broker.setcash(1000.0)

    # Add a FixedSize sizer according to the stake
    cerebro.addsizer(bt.sizers.FixedSize, stake=10)

    # Set the commission
    cerebro.broker.setcommission(commission=0.0)

    # Print out the starting conditions
    print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())

    # Run over everything
    cerebro.run()

    # Print out the final result
    print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())

    # Plot the result
    cerebro.plot()

After the execution the output is:

Starting Portfolio Value: 1000.00
2000-02-18T00:00:00, Close, 27.61
2000-02-22T00:00:00, Close, 27.97
2000-02-22T00:00:00, BUY CREATE, 27.97
2000-02-23T00:00:00, BUY EXECUTED, Size 10, Price: 28.38, Cost: 283.80, Commission 0.00
2000-02-23T00:00:00, Close, 29.73
...
...
...
2000-12-21T00:00:00, BUY CREATE, 27.82
2000-12-22T00:00:00, BUY EXECUTED, Size 10, Price: 28.65, Cost: 286.50, Commission 0.00
2000-12-22T00:00:00, Close, 30.06
2000-12-26T00:00:00, Close, 29.17
2000-12-27T00:00:00, Close, 28.94
2000-12-28T00:00:00, Close, 29.29
2000-12-29T00:00:00, Close, 27.41
2000-12-29T00:00:00, SELL CREATE, 27.41
Final Portfolio Value: 981.00

The final result has changed even though the logic hasn't. This is true, but the logic has not been applied to the same number of bars.

Note
As explained before, the platform will first call next when all indicators are ready to produce a value. In this plotting example (very clear in the chart) the MACD is the last indicator to be fully ready (all 3 lines producing an output). The 1st BUY order is no longer scheduled during Jan 2000 but close to the end of Feb 2000.

The chart:
[chart image]

Let's Optimize

Many trading books say each market and each traded stock (or commodity or ...) has different rhythms. That there is no such thing as one size fits all.

Before the plotting sample, when the strategy started using an indicator, the period default value was 15 bars. It's a strategy parameter and it can be used in an optimization to change the value of the parameter and see which one better fits the market.

Note
There is plenty of literature about optimization and its associated pros and cons. But the advice will always point in the same direction: do not overoptimize. If a trading idea is not sound, optimizing may end up producing a positive result which is only valid for the backtested dataset.

The sample is modified to optimize the period of the Simple Moving Average.
For the sake of clarity any output with regards to Buy/Sell orders has been removed The example now: from __future__ import (absolute_import, division, print_function, unicode_literals) import datetime # For datetime objects import os.path # To manage paths import sys # To find out the script name (in argv[0]) # Import the backtrader platform import backtrader as bt # Create a Stratey class TestStrategy(bt.Strategy): params = ( ('maperiod', 15), ('printlog', False), ) def log(self, txt, dt=None, doprint=False): ''' Logging function fot this strategy''' if self.params.printlog or doprint: dt = dt or self.datas[0].datetime.date(0) print('%s, %s' % (dt.isoformat(), txt)) def __init__(self): # Keep a reference to the "close" line in the data[0] dataseries self.dataclose = self.datas[0].close # To keep track of pending orders and buy price/commission self.order = None self.buyprice = None self.buycomm = None # Add a MovingAverageSimple indicator self.sma = bt.indicators.SimpleMovingAverage( self.datas[0], period=self.params.maperiod) def notify_order(self, order): if order.status in [order.Submitted, order.Accepted]: # Buy/Sell order submitted/accepted to/by broker - Nothing to do return # Check if an order has been completed # Attention: broker could reject order if not enough cash if order.status in [order.Completed]: if order.isbuy(): self.log( 'BUY EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' % (order.executed.price, order.executed.value, order.executed.comm)) self.buyprice = order.executed.price self.buycomm = order.executed.comm else: # Sell self.log('SELL EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' % (order.executed.price, order.executed.value, order.executed.comm)) self.bar_executed = len(self) elif order.status in [order.Canceled, order.Margin, order.Rejected]: self.log('Order Canceled/Margin/Rejected') # Write down: no pending order self.order = None def notify_trade(self, trade): if not trade.isclosed: return self.log('OPERATION PROFIT, GROSS %.2f, NET %.2f' % (trade.pnl, trade.pnlcomm)) def next(self): # Simply log the closing price of the series from the reference self.log('Close, %.2f' % self.dataclose[0]) # Check if an order is pending ... if yes, we cannot send a 2nd one if self.order: return # Check if we are in the market if not self.position: # Not yet ... we MIGHT BUY if ... if self.dataclose[0] > self.sma[0]: # BUY, BUY, BUY!!! (with all possible default parameters) self.log('BUY CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.buy() else: if self.dataclose[0] < self.sma[0]: # SELL, SELL, SELL!!! (with all possible default parameters) self.log('SELL CREATE, %.2f' % self.dataclose[0]) # Keep track of the created order to avoid a 2nd order self.order = self.sell() def stop(self): self.log('(MA Period %2d) Ending Value %.2f' % (self.params.maperiod, self.broker.getvalue()), doprint=True) if __name__ == '__main__': # Create a cerebro entity cerebro = bt.Cerebro() # Add a strategy strats = cerebro.optstrategy( TestStrategy, maperiod=range(10, 31)) # Datas are in a subfolder of the samples. 
Need to find where the script is
    # because it could have been called from anywhere
    modpath = os.path.dirname(os.path.abspath(sys.argv[0]))
    datapath = os.path.join(modpath, '../../datas/orcl-1995-2014.txt')

    # Create a Data Feed
    data = bt.feeds.YahooFinanceCSVData(
        dataname=datapath,
        # Do not pass values before this date
        fromdate=datetime.datetime(2000, 1, 1),
        # Do not pass values after this date
        todate=datetime.datetime(2000, 12, 31),
        reverse=False)

    # Add the Data Feed to Cerebro
    cerebro.adddata(data)

    # Set our desired cash start
    cerebro.broker.setcash(1000.0)

    # Add a FixedSize sizer according to the stake
    cerebro.addsizer(bt.sizers.FixedSize, stake=10)

    # Set the commission
    cerebro.broker.setcommission(commission=0.0)

    # Run over everything
    cerebro.run(maxcpus=1)

Instead of calling addstrategy to add a strategy class to Cerebro, the call is made to optstrategy. And instead of passing a single value, a range of values is passed.

One of the "Strategy" hooks is added: the stop method, which will be called when the data has been exhausted and backtesting is over. It's used to print the final net value of the portfolio in the broker (previously this was done in Cerebro).

The system will execute the strategy for each value of the range. The following will be output:

2000-12-29, (MA Period 10) Ending Value 880.30
2000-12-29, (MA Period 11) Ending Value 880.00
2000-12-29, (MA Period 12) Ending Value 830.30
2000-12-29, (MA Period 13) Ending Value 893.90
2000-12-29, (MA Period 14) Ending Value 896.90
2000-12-29, (MA Period 15) Ending Value 973.90
2000-12-29, (MA Period 16) Ending Value 959.40
2000-12-29, (MA Period 17) Ending Value 949.80
2000-12-29, (MA Period 18) Ending Value 1011.90
2000-12-29, (MA Period 19) Ending Value 1041.90
2000-12-29, (MA Period 20) Ending Value 1078.00
2000-12-29, (MA Period 21) Ending Value 1058.80
2000-12-29, (MA Period 22) Ending Value 1061.50
2000-12-29, (MA Period 23) Ending Value 1023.00
2000-12-29, (MA Period 24) Ending Value 1020.10
2000-12-29, (MA Period 25) Ending Value 1013.30
2000-12-29, (MA Period 26) Ending Value 998.30
2000-12-29, (MA Period 27) Ending Value 982.20
2000-12-29, (MA Period 28) Ending Value 975.70
2000-12-29, (MA Period 29) Ending Value 983.30
2000-12-29, (MA Period 30) Ending Value 979.80

Results:

For periods below 18 the strategy (commissionless) loses money.
For periods between 18 and 26 (both included) the strategy makes money.
Above 26 money is lost again.

And the winning period for this strategy and the given data set is 20 bars, which wins 78.00 units over 1000 $/€ (7.8%).

Note
The extra indicators from the plotting example have been removed and the start of operations is only influenced by the Simple Moving Average which is being optimized. Hence the slightly different result for period 15.

Conclusion

The incremental samples have shown how to go from a barebones script to a fully working trading system which even plots the results and can be optimized.

A lot more can be done to try to improve the chances of winning:

Self defined Indicators. Creating an indicator is easy (and even plotting them is easy); see the sketch after this section.
Sizers. Money Management is for many the key to success.
Order Types (limit, stop, stoplimit).
Some others.

To ensure all the above items can be fully utilized, the documentation provides an insight into them (and other topics).

Look in the table of contents and keep on reading ... and developing.

Best of luck
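As a parting illustration of the "Self defined Indicators" point above, here is a minimal, hedged sketch of what a custom indicator can look like. The class name and the exact line arithmetic are my own choices for illustration, not part of this quickstart:

import backtrader as bt

class CloseMinusSMA(bt.Indicator):       # hypothetical example indicator
    lines = ('diff',)                    # one output line: positive when close > SMA
    params = (('period', 15),)

    def __init__(self):
        sma = bt.indicators.SimpleMovingAverage(self.data, period=self.p.period)
        # Line arithmetic: the subtraction is evaluated bar by bar by the platform
        self.lines.diff = self.data.close - sma

A strategy could then, for instance, buy when the diff line turns positive; details of indicator development belong to the dedicated documentation section rather than this quickstart.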
This resource provides the complete files for an RFID access control system based on a 51-series microcontroller, including the PCB layout, the schematic, a thesis and the source program. The design consists of the microcontroller, an RFID-RC522 RF card module, an LCD display, a light-control circuit, a buzzer alarm circuit, a storage module and keys. The system supports access control both by password and by card swipe: the light turning on indicates the door was opened successfully, while the buzzer sounding indicates the attempt failed.

Resource contents
PCB layout: the PCB design of the system, for building and debugging the hardware circuit.
Schematic: shows the circuit connections and module layout in detail, helping users understand how the system works.
Thesis: describes the design approach, implementation and test results, suitable for study and research.
Source program: the complete source code of the system, which users can modify and optimize as needed.

System functions
Card-swipe unlocking: the user opens the door by swiping an RFID card; the system identifies the card and decides whether to grant access.
Password unlocking: the user opens the door by entering a preset password; the system verifies that the password is correct.
Status display: the system shows the current status on the LCD, such as card accepted or wrong password.
Light indication: light on means the door opened successfully; light off means the attempt failed or no operation took place.
Buzzer alarm: when a card swipe or password entry fails, the buzzer sounds to alert the user.

Intended audience
Students and researchers in electronic engineering, automation and related fields.
Hobbyists interested in microcontrollers and RFID technology.
Engineers and developers who need to build similar access control systems.