frenetic-lang / netcore-1.0
Compiler from NetCore to OpenFlow and associated tools.
License: BSD 3-Clause "New" or "Revised" License
The slice compiler thought that FLOOD meant forward out every port, but it actually means forward out every port except the one the packet came in on.
This touches the compiler and the SAT verification. A solution needs to:
Personally, I'm a fan of BSD3. Apache is often preferred by industry since it grants license to patents, etc.
IMO, it should be a part of the located packet (Transmission) tuple.
Nettle dies really badly when it sees an IPv6 packet ("unknown ethernet frame"). We need to have a workaround so that things don't break.
Disabling IPv6 fixes it, but that's not really a good option for shipping. Shipping a forked nettle might be the solution...
When I was originally working on this compiler, we had come up with a mechanism of "unresolved" or "indeterminate" variables in policies to allow higher-level inspectors to enjoy incremental compilation (instead of regenerating the entire policy over and over again). We should collectively try to remember the consensus and come up with a plan for implementation.
Feel free to assign this task to me ATM.
Alec and I will review this.
Also for virtualization, see #37.
This is post-SIGCOMM.
Examples should log things so we have demos to show people.
For virtualization, composition of NetCore policies can be useful. This is just a reminder issue until I finish writing up the compilation algorithm for the eternally delayed e-mail that I will send out at some point...
Lots of policies generated by the slice compiler contain large subtrees that are empty. We should find empty intersections of switches and headers and trim them out. A first pass at this is already in Frenetic/NetCore/Reduce.
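As a rough illustration of the kind of trimming such a pass could do, here is a sketch over toy Policy and Predicate types; these stand in for the real Frenetic/NetCore types and are not the actual Reduce implementation.

```haskell
module Main where

-- Toy stand-ins for the real NetCore types.
data Predicate = Any | None | OnSwitch Int | And Predicate Predicate
  deriving (Eq, Show)

data Policy = Empty | Act Predicate [Int] | Union Policy Policy
  deriving (Eq, Show)

-- Fold away predicates that can never match.
reducePred :: Predicate -> Predicate
reducePred (And p q) =
  case (reducePred p, reducePred q) of
    (None, _) -> None
    (_, None) -> None
    (Any, q') -> q'
    (p', Any) -> p'
    (p', q')  -> And p' q'
reducePred p = p

-- Trim empty subtrees out of a policy union.
reducePolicy :: Policy -> Policy
reducePolicy (Act p acts)
  | reducePred p == None = Empty
  | otherwise            = Act (reducePred p) acts
reducePolicy (Union a b) =
  case (reducePolicy a, reducePolicy b) of
    (Empty, b') -> b'
    (a', Empty) -> a'
    (a', b')    -> Union a' b'
reducePolicy p = p

main :: IO ()
main = print (reducePolicy
  (Union (Act (And None (OnSwitch 1)) [1]) (Act Any [2])))
```

The empty-predicate branch is deleted and the union collapses to the single live subtree.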
Some modifications cannot be deployed to OpenFlow 1.0 switches. For example, consider the following policy:
dlTyp 0x0800 ==> modify [(1, nwSrc 10.0.0.100), (2, nwDst 10.0.0.0)]
This policy matches all IP traffic, modifies the source IP to be 10.0.0.100 and forwards the result out port 1, and modifies the destination IP to be 10.0.0.0 and forwards the result out port 2.
In general, it's not possible to apply a modification, forward the result, then undo the modification and apply a different one (as in the policy above). However, imagine we see some packet with nwSrc == 192.168.1.1. If we match it exactly, we can do the following:
(dlTyp 0x0800 <&&> nwSrc 192.168.1.1 ==> modify [(1, nwSrc 10.0.0.100), (2, nwSrc 192.168.1.1 <+> nwDst 10.0.0.0)]) <+>
dlTyp 0x0800 ==> modify [(1, nwSrc 10.0.0.100), (2, nwDst 10.0.0.0)]
Because we match the source IP exactly, we can add a modification to reinstate it, essentially undoing the previous modification.
(Note that this example assumes that the modification/forward actions are done in order.)
We can use reactive specialization to install the specialized rules as new flows are matched.
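The ordering problem above can be seen in a toy model of sequential modify-then-forward. The Packet type and field names here are simplified placeholders, not the real NetCore representation; the point is only that a set-field action persists into later outputs unless it is explicitly reinstated.

```haskell
module Main where

data Packet = Packet { nwSrc :: String, nwDst :: String }
  deriving (Eq, Show)

-- OpenFlow 1.0 applies an action list in order: each set-field
-- persists into later outputs unless explicitly undone.
applyActions :: [(Int, Packet -> Packet)] -> Packet -> [(Int, Packet)]
applyActions acts pkt = go pkt acts
  where
    go _ [] = []
    go p ((port, m) : rest) =
      let p' = m p in (port, p') : go p' rest

main :: IO ()
main = do
  let pkt = Packet "192.168.1.1" "10.1.1.1"
      -- Naive compilation: the nwSrc rewrite leaks into port 2's copy.
      naive = applyActions
        [ (1, \p -> p { nwSrc = "10.0.0.100" })
        , (2, \p -> p { nwDst = "10.0.0.0" }) ] pkt
      -- Specialized rule: reinstate the exactly matched source first.
      fixed = applyActions
        [ (1, \p -> p { nwSrc = "10.0.0.100" })
        , (2, \p -> p { nwSrc = "192.168.1.1"
                      , nwDst = "10.0.0.0" }) ] pkt
  print naive
  print fixed
```

In the naive action list, port 2's copy leaves with nwSrc already rewritten to 10.0.0.100; the specialized list restores 192.168.1.1 before the second output.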
We should be able to get mininet tests working easily on arbitrary new machines. Right now, they only work consistently on Arjun's machine.
Names, modules, exports, etc.
Let's write a script that fires up mininet, the controller, and terminals with directions.
This is the very last issue we will close before release.
putStrLn
strace
I'm open to suggestions.
E.g., "DoNothing", "IfMatch", and "Both"
For releases, we should be compiling with one of the -O flags. Older versions of GHC don't do much optimization by default; I don't know if the latest version does. Performance can go up by orders of magnitude due to strictness analysis and fusion.
I'm not a cabal expert; can we have different build profiles?
How do y'all do this? I've always just alphabetized:
We've been conflating the OF FLOOD action with the OF ALL action. FLOOD is an optional action that sends out every port designated as a spanning tree port except the inport. ALL is a required action that sends out every port except the inport. Right now we use FLOOD when we really want ALL.
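The distinction can be stated as two small port-set functions. This is just a sketch of the semantics, with plain port lists standing in for the real OpenFlow types: ALL sends out every port except the inport, while FLOOD additionally restricts to ports enabled by spanning tree.

```haskell
module Main where

-- ALL: required action, every port except the inport.
allAction :: [Int] -> Int -> [Int]
allAction ports inport = filter (/= inport) ports

-- FLOOD: optional action, only spanning-tree-enabled ports,
-- again excluding the inport.
floodAction :: [Int] -> [Int] -> Int -> [Int]
floodAction ports stpPorts inport =
  filter (\p -> p /= inport && p `elem` stpPorts) ports

main :: IO ()
main = do
  print (allAction [1,2,3,4] 1)            -- [2,3,4]
  print (floodAction [1,2,3,4] [1,2,3] 1)  -- [2,3]: port 4 blocked by STP
```

On a topology with no spanning-tree configuration the two can coincide, which is presumably why the conflation went unnoticed.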
Don't do this until Cole finishes NAT hackery.
I had an image for an older version. I don't know if it is still useful, but I can try to dig up a copy.
We should have a Channel (Loc, Packet) that sends packets out of that port. This is probably a prereq for doing lots of interesting controller-side applications like ARP/DHCP.
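One possible shape for that channel, using Chan from base; Loc and Packet here are placeholder types for the real NetCore ones. A controller app writes (Loc, Packet) pairs, and a runtime loop reads them and emits the corresponding packet-out messages.

```haskell
module Main where

import Control.Concurrent.Chan

type Switch = Int
type Port   = Int
type Loc    = (Switch, Port)

newtype Packet = Packet String deriving Show

main :: IO ()
main = do
  -- The app side writes; the runtime side reads and sends packet-outs.
  txChan <- newChan :: IO (Chan (Loc, Packet))
  writeChan txChan ((1, 2), Packet "arp-reply")
  (loc, pkt) <- readChan txChan
  putStrLn ("packet-out at " ++ show loc ++ ": " ++ show pkt)
```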
The learning switch already supports conventional ARP by providing basic ethernet connectivity. We should build a module that uses the controller as an ARP cache; being able to answer ARP requests directly from the server would help prevent spoofing.
Eventually, this should integrate with a DHCP module which would prevent spoofing altogether.
Queries allocate sequential IDs that are used both in SAT testing and to make queries a member of Ord. If IDs are not unique, bad things happen, but we don't currently assign them in a threadsafe way, so they could be non-unique.
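A sketch of one fix, assuming the current allocator is a plain counter: make the read-increment-write a single atomic step with atomicModifyIORef', so concurrent callers can never observe the same ID.

```haskell
module Main where

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Monad (replicateM, replicateM_)
import Data.IORef
import Data.List (nub)

-- Returns an action that hands out a fresh ID on each call.
newIdSource :: IO (IO Int)
newIdSource = do
  ref <- newIORef 0
  -- atomicModifyIORef' performs the increment atomically, so two
  -- threads can never be given the same ID.
  return (atomicModifyIORef' ref (\n -> (n + 1, n)))

main :: IO ()
main = do
  nextId <- newIdSource
  done   <- newEmptyMVar
  -- Hammer the source from four threads, then draw some IDs.
  replicateM_ 4 (forkIO (replicateM_ 100 nextId >> putMVar done ()))
  replicateM_ 4 (takeMVar done)
  ids <- replicateM 10 nextId
  print (ids == nub ids)  -- True: no duplicates
```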
Some compiled rules generate suboptimal classifiers. For example, the following predicate:
[And ({},Not (And ({DlTyp = 2054},{NwProto = 1},Not ({})))) ==> {AllPorts} emit {}
,And (Not ({DlDst = ff:ff:ff:ff:ff:ff}) ,Not (And ({DlTyp = 2054},{NwProto = 1},Not ({})))) ==> {} emit {0}
,And ({DlTyp = 2054},{NwProto = 1},Not ({})) ==> {} emit {1}
,And ({DlTyp = 2054},{NwProto = 2},Not (Not ({}))) ==> {} emit {2}]
compiles to this classifier:
(Match {dstEthAddress = "EthernetAddress 281474976710655", ethFrameType = "2054", matchIPProtocol = "2"}, [SendOutPort Flood,SendOutPort (ToController 65535)])
(Match {ethFrameType = "2054", matchIPProtocol = "2"}, [SendOutPort Flood,SendOutPort (ToController 65535)])
(Match {dstEthAddress = "EthernetAddress 281474976710655", ethFrameType = "2054", matchIPProtocol = "1"}, [SendOutPort Flood])
(Match {ethFrameType = "2054", matchIPProtocol = "1"}, [SendOutPort Flood,SendOutPort (ToController 65535)])
(Match {dstEthAddress = "EthernetAddress 281474976710655"}, [SendOutPort Flood])
(Match {}, [SendOutPort Flood,SendOutPort (ToController 65535)])
But the first rule matches a strict subset of the second and has the same actions. We shouldn't emit two rules where one suffices.
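A simplified version of the redundancy check, with matches modeled as field-to-value maps (an assumption, not the real classifier representation): a rule is redundant when a later rule matches everything it matches (its match is a sub-map) and performs the same actions. A real pass would also need to check that no intervening rule overlaps.

```haskell
module Main where

import qualified Data.Map as M

type Match = M.Map String String
type Rule  = (Match, [String])

-- later `subsumes` earlier: the later rule's match is less specific
-- (a sub-map of the earlier one) and the actions are identical.
subsumes :: Rule -> Rule -> Bool
subsumes (mLater, aLater) (mEarlier, aEarlier) =
  M.isSubmapOf mLater mEarlier && aLater == aEarlier

-- Drop each rule that is subsumed by the rule directly after it.
trim :: [Rule] -> [Rule]
trim (r1 : r2 : rest)
  | subsumes r2 r1 = trim (r2 : rest)
  | otherwise      = r1 : trim (r2 : rest)
trim rs = rs

main :: IO ()
main = do
  let rules =
        [ ( M.fromList [ ("dstEthAddress", "ff:ff:ff:ff:ff:ff")
                       , ("ethFrameType", "2054")
                       , ("matchIPProtocol", "2") ]
          , ["Flood", "ToController"] )
        , ( M.fromList [ ("ethFrameType", "2054")
                       , ("matchIPProtocol", "2") ]
          , ["Flood", "ToController"] ) ]
  print (length (trim rules))  -- 1: the over-specific first rule is dropped
```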
We translate the surface syntax, defined in Types.hs, to an internal AST:
https://github.com/frenetic-lang/netcore/blob/master/src/Frenetic/NetCore/Semantics.hs#L308
The semantics and compiler are now defined over this internal AST, and not the surface syntax. We should also update the Slice compiler to use it.
When complete, we'll be able to remove query IDs from the surface syntax: they're a usability wart.
This should be straightforward to do, since the type of predicates is the same.
I'd be happy to hack this up, but Alec might be quicker.
The current learning switch test simply ensures reachability. We should also ensure that correct forwarding rules are installed on switches.
Which is it?
To ease development, we've been statically linking to the NetCore sources. For release, we should test linking with the frenetic cabal package.
The ethernet standard specifies FF:FF:FF:FF:FF:FF as a special broadcast destination MAC address whose frames are delivered to all hosts on the LAN. The learning switch should support this directly. To do this:
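One way the special case could look, sketched with placeholder Mac/Port types rather than the real learning-switch code: never learn the broadcast address, and always flood frames sent to it.

```haskell
module Main where

import qualified Data.Map as M

type Mac  = String
type Port = Int

broadcast :: Mac
broadcast = "ff:ff:ff:ff:ff:ff"

-- Decide the output ports for a frame, given the learned MAC table.
forwardPorts :: M.Map Mac Port -> [Port] -> Port -> Mac -> [Port]
forwardPorts table ports inport dst
  | dst == broadcast = filter (/= inport) ports  -- always flood broadcast
  | otherwise        = case M.lookup dst table of
      Just p  -> [p]                             -- learned unicast
      Nothing -> filter (/= inport) ports        -- unknown: flood

main :: IO ()
main = do
  let table = M.fromList [("00:00:00:00:00:01", 1)]
  print (forwardPorts table [1,2,3] 2 broadcast)            -- [1,3]
  print (forwardPorts table [1,2,3] 2 "00:00:00:00:00:01")  -- [1]
```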
Currently, we're working on standardizing how we represent network topologies internally for implementing more sophisticated policies.
TopoParser is intended to provide a means of picking up topologies from various different sources. Currently, parseTopo is the primary function: it takes a topology represented as a string in mininet's format (i.e. what you get when you run the net command in mininet) and turns it into something we can build into a topology as defined in Topo.hs. See TopoSample for an example.
We want to extend this to pick up the topology from mininet directly, so that when a controller fires up, it can pick up a representation of the topology being simulated in mininet for building policies. This amounts to adding some function to TopoParser with pattern: ?? -> String, where ?? is whatever is needed to pick up the output of the net command from mininet.
Currently, if you run testMakeTop in TopoSample.hs (in examples), you will build a Topo instance from a string in the format mininet spits out. The proposed extension would allow us to write an alternative to testParse that picks up a topology from a running mininet instance directly, rather than from a hard-coded string copied and pasted from mininet.
One place that could take advantage of this change is in BaseMon which currently picks up a topology from a string and the parser.
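For a flavor of the parsing side, here is a rough sketch of turning one line of the net command's output into a node and its neighbor list. The exact mininet format and the real Topo type are assumptions here; TopoSample holds the canonical format.

```haskell
module Main where

import Data.List (isInfixOf)

-- Each line of `net` output looks like:
--   "h1 h1-eth0:s1-eth1"  or  "s1 lo:  s1-eth1:h1-eth0 s1-eth2:h2-eth0"
-- i.e. a node name followed by iface:peerIface pairs.
parseLine :: String -> (String, [String])
parseLine line =
  case words line of
    (node : links) ->
      ( node
      , [ takeWhile (/= '-') peer         -- keep just the peer node name
        | l <- links
        , (_, ':' : peer) <- [break (== ':') l]
        , "-" `isInfixOf` peer ] )        -- skips bare "lo:" entries
    [] -> ("", [])

main :: IO ()
main = do
  print (parseLine "h1 h1-eth0:s1-eth1")
  print (parseLine "s1 lo:  s1-eth1:h1-eth0 s1-eth2:h2-eth0")
```

A function that shells out to mininet and feeds each line through something like this would give BaseMon a live topology instead of the pasted string.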