A network policy is a specification of how groups of pods are allowed to communicate with each other and with other network endpoints.
NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.
Network policies are implemented by the network plugin. To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.
Isolated and Non-isolated Pods
By default, pods are non-isolated; they accept traffic from any source.
Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)
Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result.
The NetworkPolicy resource
See the NetworkPolicy reference for a full definition of the resource.
An example NetworkPolicy might look like this:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
```
Note: POSTing this to the API server for your cluster will have no effect unless your chosen networking solution supports network policy.
Mandatory Fields: As with all other Kubernetes config, a NetworkPolicy needs apiVersion, kind, and metadata fields. For general information about working with config files, see Configure Containers Using a ConfigMap, and Object Management.
spec: NetworkPolicy spec has all the information needed to define a particular network policy in the given namespace.
podSelector: Each NetworkPolicy includes a
podSelector which selects the grouping of pods to which the policy applies. The example policy selects pods with the label "role=db". An empty
podSelector selects all pods in the namespace.
policyTypes: Each NetworkPolicy includes a policyTypes list which may include either Ingress, Egress, or both. The policyTypes field indicates whether the given policy applies to ingress traffic to the selected pods, egress traffic from the selected pods, or both. If no policyTypes are specified on a NetworkPolicy then by default Ingress will always be set and Egress will be set if the NetworkPolicy has any egress rules.
ingress: Each NetworkPolicy may include a list of allowed ingress rules. Each rule allows traffic which matches both the from and ports sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an ipBlock, the second via a namespaceSelector and the third via a podSelector.
egress: Each NetworkPolicy may include a list of allowed egress rules. Each rule allows traffic which matches both the to and ports sections. The example policy contains a single rule, which matches traffic on a single port to any destination in 10.0.0.0/24.
So, the example NetworkPolicy:
isolates "role=db" pods in the "default" namespace for both ingress and egress traffic (if they weren't already isolated)
(Ingress rules) allows connections to all pods in the "default" namespace with the label "role=db" on TCP port 6379 from:
- any pod in the "default" namespace with the label "role=frontend"
- any pod in a namespace with the label "project=myproject"
- IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (i.e., all of 172.17.0.0/16 except 172.17.1.0/24)
(Egress rules) allows connections from any pod in the "default" namespace with the label "role=db" to CIDR 10.0.0.0/24 on TCP port 5978
See the Declare Network Policy walkthrough for further examples.
There are four kinds of selectors that can be specified in an ingress from section or egress to section:
podSelector: This selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources or egress destinations.
namespaceSelector: This selects particular namespaces for which all Pods should be allowed as ingress sources or egress destinations.
namespaceSelector and podSelector: A single to/from entry that specifies both namespaceSelector and podSelector selects particular Pods within particular namespaces. Be careful to use correct YAML syntax; this policy:
```yaml
...
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        user: alice
    podSelector:
      matchLabels:
        role: client
...
```
contains a single
from element allowing connections from Pods with the label
role=client in namespaces with the label
user=alice. But this policy:
```yaml
...
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        user: alice
  - podSelector:
      matchLabels:
        role: client
...
```
contains two elements in the from array, and allows connections from Pods in the local Namespace with the label role=client, or from any Pod in any namespace with the label user=alice.
When in doubt, use
kubectl describe to see how Kubernetes has interpreted the policy.
ipBlock: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.
Cluster ingress and egress mechanisms often require rewriting the source or destination IP
of packets. In cases where this happens, it is not defined whether this happens before or
after NetworkPolicy processing, and the behavior may be different for different
combinations of network plugin, cloud provider,
Service implementation, etc.
In the case of ingress, this means that in some cases you may be able to filter incoming
packets based on the actual original source IP, while in other cases, the "source IP" that
the NetworkPolicy acts on may be the IP of a
LoadBalancer or of the Pod's node, etc.
For egress, this means that connections from pods to Service IPs that get rewritten to cluster-external IPs may or may not be subject to ipBlock-based policies.
By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. The following examples let you change the default behavior in that namespace.
Default deny all ingress traffic
You can create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods.
This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated. This policy does not change the default egress isolation behavior.
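A default-deny-ingress policy of this shape might look like the following sketch (the metadata name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # illustrative name
spec:
  podSelector: {}              # empty selector: applies to all pods in the namespace
  policyTypes:
  - Ingress                    # no ingress rules listed, so no ingress traffic is allowed
```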
Default allow all ingress traffic
If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all traffic in that namespace.
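One way to write such an allow-all policy is a single empty ingress rule, which matches all traffic (the name below is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress      # illustrative name
spec:
  podSelector: {}
  ingress:
  - {}                         # a single empty rule matches (and allows) all ingress traffic
  policyTypes:
  - Ingress
```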
Default deny all egress traffic
You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any egress traffic from those pods.
This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the default ingress isolation behavior.
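Such a default-deny-egress policy might be sketched as (name illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress    # illustrative name
spec:
  podSelector: {}              # empty selector: applies to all pods in the namespace
  policyTypes:
  - Egress                     # no egress rules listed, so no egress traffic is allowed
```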
Default allow all egress traffic
If you want to allow all traffic from all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all egress traffic in that namespace.
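As with ingress, an empty egress rule matches all traffic; a sketch of such a policy (name illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress       # illustrative name
spec:
  podSelector: {}
  egress:
  - {}                         # a single empty rule matches (and allows) all egress traffic
  policyTypes:
  - Egress
```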
Default deny all ingress and all egress traffic
You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by creating the following NetworkPolicy in that namespace.
This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic.
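Listing both policy types with no corresponding rules denies both directions; a sketch (name illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all       # illustrative name
spec:
  podSelector: {}              # applies to all pods in the namespace
  policyTypes:                 # both types listed, with no ingress or egress rules
  - Ingress
  - Egress
```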
SCTP support
FEATURE STATE: Kubernetes v1.12 [alpha]
To use this feature, you (or your cluster administrator) will need to enable the SCTPSupport feature gate for the API server. When the feature gate is enabled, you can set the protocol field of a NetworkPolicy to SCTP.
Note: You must be using a CNI plugin that supports SCTP protocol NetworkPolicies.
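An ingress rule permitting SCTP traffic might be sketched as follows; the name, label, and port number are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sctp             # illustrative name
spec:
  podSelector:
    matchLabels:
      role: db                 # illustrative label
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: SCTP           # requires the SCTPSupport feature gate and a CNI plugin with SCTP support
      port: 7777               # illustrative port
```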