
Friday, 3 March 2017

How ITIL Differentiates Problems and Incidents

Students in ITIL® Foundation classes often find it challenging to differentiate between incidents and problems. To address this issue and offer clarification, this blog will identify the differences between incidents and problems, how they are related, and why it matters.

What is an Incident?

According to ITIL, an incident is an unplanned interruption to a service or a degradation in the quality of a service. What often determines the classification of something as an incident is whether or not the service level agreement (SLA) was breached. However, ITIL allows for raising an incident even before an SLA has been breached in order to limit or prevent impact.

In layman’s terms, an incident is the representation of an outage.

What is a Problem?

According to ITIL, a problem is the root cause of one or more incidents. Problems can be raised in response to one or more incidents, or they can be raised without the existence of a corresponding incident.

In layman’s terms, a problem is the representation of the cause, or potential cause, of one or more outages.

What is the Relationship Between Incidents and Problems?

Generally speaking, the relationship between the two is that one problem is the cause of one or more incidents. However, it is possible to have an incident (or group of incidents) that is caused by more than one problem.

Why Does ITIL Distinguish Between Incidents and Problems?

The point of distinguishing between incidents and problems is the same as separating cause and effect. Problems are the cause, and incidents are the effect.

ITIL encourages organizations to distinguish between these things because the two are often treated and resolved differently. Addressing an incident simply means that whatever service was impacted has been temporarily restored. It does not mean that the incident will not recur at some time in the future. When I say “temporarily,” keep in mind that could mean one minute or 10 years. The point is that a resolution to an incident is not permanent.

Problems, however, are the cause of incidents. We might use different techniques to identify the root cause of a problem and ultimately resolve that problem. When a resolution occurs, change management is invoked because addressing root causes often entails some amount of risk.

Effective incident management ensures that as a service provider you are able to keep the promises you made in your SLAs by providing a mechanism to quickly restore service when it’s necessary. Problem management ensures that as a service provider you are able to reactively respond to incidents so that they don’t recur and proactively prevent incidents from happening.

These are separate processes because they often require different skill sets and activities. Incident management wants to quickly restore service in line with any SLAs that are in place whereas problem management wants to eliminate the root causes of incidents. Sometimes to properly address a problem, a service provider must cause or extend an existing outage.

Our Solution

In our Mastering Problem Management course, students apply problem management processes and techniques to their specific workplace experiences. This non-certification, exercise-driven learning approach gives learners the tools they need for real-world ITIL application.




Thursday, 2 March 2017

How to Troubleshoot Cisco’s Dynamic Multipoint VPN (DMVPN)

Dynamic Multipoint Virtual Private Network (DMVPN) is a network solution for organizations with many sites that need access either to a hub site or to each other. It was designed by Cisco to reduce the complexity of configuring and supporting a full mesh of VPNs between sites. Other vendors now support DMVPN, but Cisco is where it started.

Benefits of using DMVPN

The dynamic component of DMVPN means that the individual VPN tunnels do not all have to be pre-configured on every endpoint. DMVPN allows for dynamic spoke-to-spoke communication once the spokes have made contact with the hub or hubs.

It was intended to be used in a hub-and-spoke configuration (with the possibility of redundant hubs). DMVPN is built on standards-based components: Generic Routing Encapsulation (GRE, RFC 1701), Next Hop Resolution Protocol (NHRP, RFC 2332) and Internet Protocol Security (IPSec, covered by multiple RFCs and standards).

The main idea is to reduce the configuration on the hub router(s) and push some of the burden onto the spoke routers. The spokes use NHRP to register with the hub, and each spoke can then use the hub as a resolution server to build dynamic tunnels to other spokes.

What if it doesn’t work?

There are several moving parts here to look at: base configuration of the tunnel interfaces (and the basic connectivity), the registration of the spokes to the hub(s), and IPSec.

First off, the tunnel interfaces have to be configured with a source interface or address to create the tunnel, known as the public address relative to DMVPN. The addressing can be either IPv4 or IPv6, but the addressing for the source of the tunnel interfaces must be reachable by the other routers. Whether it’s spoke to hub or hub to spoke, using a ping or traceroute is the best way to verify connectivity.

The configuration may require IPSec, but try the tunnels without it. Ping the tunnel interface address, known as the private address. If the tunnels work without IPSec but don’t work with it, jump to troubleshooting IPSec. If the tunnels aren’t able to pass traffic without IPSec, then start looking at the basic configuration of the tunnel and the next hop resolution protocol.

The configuration of the hub is typically minimal relative to NHRP; most of it is done on the spoke routers. Make sure the mapping statements are correct: ip(v6) nhrp map <private-address> <public-address>. Also, make sure the next hop server command points to the private (tunnel) address of the hub: ip(v6) nhrp nhs <private-address>. The corresponding map statement then resolves that private address to the hub’s public address.

interface Tunnel123
 ip address 192.168.123.1 255.255.255.0
 ip nhrp map 192.168.123.2 10.1.1.2
 ip nhrp map multicast 10.1.1.2
 ip nhrp network-id 1
 ip nhrp nhs 192.168.123.2
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint

In this example, the 192.168.123.0/24 address space is the private addressing and 10.1.1.x is the public.
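
For comparison, a hub-side counterpart could look like the following minimal sketch. It assumes the hub’s private (tunnel) address is 192.168.123.2 and its public address is 10.1.1.2, matching the spoke example above; adjust it to your own addressing and interfaces.

interface Tunnel123
 ip address 192.168.123.2 255.255.255.0
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint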

The router will accept these commands with incorrect addresses and report no errors. Use the show ip nhrp nhs detail command to check whether the spoke-to-server requests are succeeding.

R1#show ip nhrp nhs detail
Legend: E=Expecting replies, R=Responding, W=Waiting
Tunnel123:
192.168.123.2  RE priority = 0 cluster = 0  req-sent 2  req-failed 0  repl-recv 2 (00:03:13 ago)

On the next hop server, you can verify that the registration was successful with the show ip nhrp command.

R2#show ip nhrp
192.168.123.1/32 via 192.168.123.1
   Tunnel123 created 00:04:30, expire 01:55:29
   Type: dynamic, Flags: unique registered
   NBMA address: 10.1.1.1
192.168.123.3/32 via 192.168.123.3
   Tunnel123 created 00:04:30, expire 01:55:29
   Type: dynamic, Flags: unique registered
   NBMA address: 10.1.1.3

If the intent is to allow the spokes to dynamically form tunnels, but they aren’t being formed, check that shortcut forwarding is enabled on the spokes in question. The interface command ip nhrp shortcut enables this shortcut forwarding (a minimal configuration sketch follows below). Try a traceroute from one spoke to another and see if the hub shows up as an intermediate hop. If so, shortcut forwarding is not enabled.
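
As a minimal sketch, assuming a DMVPN Phase 3 style design, the relevant interface commands are ip nhrp redirect on the hub (which tells a spoke a more direct path exists) and ip nhrp shortcut on the spokes (which lets the spoke install that shorter path):

! On the hub tunnel interface
interface Tunnel123
 ip nhrp redirect
!
! On each spoke tunnel interface
interface Tunnel123
 ip nhrp shortcut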

Another consideration with DMVPN is the registration process. This process is initiated by the spokes, not the hub. If the hub router reloads or if its tunnel interface goes down and comes back up, you may have to shut down and then “no shutdown” the spoke routers’ tunnel interfaces to force re-registration. This issue has been resolved in the 15.2 code train and later releases.

If communication works without IPSec, but doesn’t with IPSec configured, it’s time to troubleshoot the IPSec configuration. The policies for phase 1 (key exchange) and phase 2 (transformation of the data) have to be the same between the hub router(s) and spokes. There can be different policies for specific spokes, but that would require different tunnel interfaces.  Check the key exchange by using the show crypto isakmp policy command on the routers in question.

R1#sh crypto isakmp policy

Global IKE policy
Protection suite of priority 10
        encryption algorithm:   Three key triple DES
        hash algorithm:         Secure Hash Standard 2 (256 bit)
        authentication method:  Pre-Shared Key
        Diffie-Hellman group:   #2 (1024 bit)
        lifetime:               86400 seconds, no volume limit

To verify that phase 1 is successful, use the show crypto isakmp sa command.

R1#sh crypto isakmp sa detail
Codes: C - IKE configuration mode, D - Dead Peer Detection
       K - Keepalives, N - NAT-traversal
       T - cTCP encapsulation, X - IKE Extended Authentication
       psk - Preshared key, rsig - RSA signature
       renc - RSA encryption
IPv4 Crypto ISAKMP SA

C-id  Local           Remote          I-VRF  Status Encr Hash   Auth DH Lifetime Cap.

1002  10.1.1.1        10.1.1.3               ACTIVE 3des sha256 psk  2  23:58:43
       Engine-id:Conn-id =  SW:2

1001  10.1.1.1        10.1.1.2               ACTIVE 3des sha256 psk  2  23:58:42
       Engine-id:Conn-id =  SW:1

IPv6 Crypto ISAKMP SA

If phase 1 looks good, check that the transform sets are consistent by comparing the output of the show crypto ipsec transform-set command on the hub and spoke routers.

R1#show crypto ipsec transform-set
Transform set default: { esp-aes esp-sha-hmac  }
   will negotiate = { Transport,  },

Transform set MyTS: { ah-sha256-hmac  }
   will negotiate = { Tunnel,  },
   { esp-3des  }
   will negotiate = { Tunnel,  },

To verify that the IPSec negotiation was successful, use the show crypto ipsec sa command. This can show you the packets that are being sent and whether they’re encrypted or not.

R1#sh crypto ipsec sa

interface: Tunnel123
    Crypto map tag: Tunnel123-head-0, local addr 10.1.1.1

protected vrf: (none)
   local  ident (addr/mask/prot/port): (10.1.1.1/255.255.255.255/47/0)
   remote ident (addr/mask/prot/port): (10.1.1.2/255.255.255.255/47/0)
   current_peer 10.1.1.2 port 500
     PERMIT, flags={origin_is_acl,}
    #pkts encaps: 55, #pkts encrypt: 55, #pkts digest: 55
    #pkts decaps: 54, #pkts decrypt: 54, #pkts verify: 54
    #pkts compressed: 0, #pkts decompressed: 0
    #pkts not compressed: 0, #pkts compr. failed: 0
    #pkts not decompressed: 0, #pkts decompress failed: 0
    #send errors 0, #recv errors 0

     local crypto endpt.: 10.1.1.1, remote crypto endpt.: 10.1.1.2
     path mtu 1500, ip mtu 1500, ip mtu idb (none)
     current outbound spi: 0x51F10868(1374750824)
     PFS (Y/N): N, DH group: none

     inbound esp sas:
      spi: 0x59A9D043(1504301123)
- output omitted -

For troubleshooting DMVPN issues, the best thing is to break it down to its components—basic connectivity, basic tunnel function and then security.
For more DMVPN troubleshooting options, visit http://ift.tt/2lxo1gR.

Related Courses
CIERS1 – Cisco Expert-Level Training for CCIE Routing and Switching v5.0
CIERS2 – Cisco Expert-Level Training for CCIE Routing and Switching Advanced Workshop 2 v5.0




Sunday, 19 February 2017

How to Reach Devices in Other Domains with IGP Route Redistribution

One size does not always fit all.

At times there’s a need to run more than one routing protocol and have more than one routing domain: multivendor shops, migration from one protocol to another, scalability issues of a single protocol, political or personal preference, production versus test networks, and mergers and acquisitions.

Redistribution is the process of passing routing information from one routing protocol to another to have reachability for devices that live in different routing domains. Each routing protocol will contribute unique information into the routing tables within its domain, but there can be a desire or need to reach devices in another domain. Redistribution is done on one or more boundary routers between a source routing domain or protocol into a target domain or protocol.

There are three options to get full reachability between domains:

  1. Default routes from a boundary router. You can pass a default route from a router that touches all routing domains (boundary router) to those routers that only participate within one domain (internal router). That would cover unknown routes from any domain that the internal routers are unaware of and have the internal routers forward to the boundary router, which would have a complete routing table since it would be participating in all the routing domains. This process works best if there is only one point of contact between the routing domains.
  2. One-way redistribution, with a default. One or more boundary routers pass a default route into one domain, but redistribute into another domain. Typically, you would pick a core protocol to redistribute into and the other protocols that get the default route would be looked at as edge protocols. One-way redistribution is used to scale up to larger numbers of routes, such as in a large multinational company. The core protocol could be BGP (Border Gateway Protocol) and the edge protocol(s) could be any IGP (Interior Gateway Protocol), such as OSPF, EIGRP, RIP or IS-IS, or even multiple instances of the same IGP. It works well with acquisitions and mergers, since the “new” part of the company doesn’t have to be running the same routing protocol as the rest, or have to change in the short term. It just adds a connection to the core.
  3. Two-way or mutual redistribution passes some or all of the routing information of one protocol into another. This is the most complex option, especially if there is more than one point of contact between the routing domains. It should be used when there are destinations that need to be reachable from one domain to another, but specific policy has to be applied to control how traffic reaches those destinations or how it has to be treated based on security policies. The common concerns with two-way redistribution are routing loops, asymmetric routing and suboptimal routing.

a. Asymmetric routing is where the forwarding path is different than the return path. Issues can arise if there’s a security policy in place for how traffic is forwarded or if there are a set of firewalls in place. Load balancers can also be disrupted by asymmetric routing. Load balancers, which distribute load to specific devices based on a shared address, expect the forwarding and return paths to be predictable.

b. Suboptimal routing is where the most preferred path in a forwarding table is not really the most direct route. This happens when the boundary router “hears” about routes from the originating protocol and also through another routing protocol as an external route. If the administrative distance of the protocol carrying the external route is more trustworthy (lower) than that of the originating protocol, the router will prefer the external route over the native route. The fix is to manipulate the administrative distance of the routes in question. This is not the most straightforward process and varies from platform to platform, even within a single vendor’s product line.

c. Routing loops, or feedback loops, can occur when routing information is redistributed into one protocol at one point of contact and then redistributed back into the originating protocol at another point of contact. To fix a routing loop, you have to create a feedback filter. The filter denies the routes originating in the target protocol from being advertised back into that same protocol. You have to build a filter for each direction. For example, if you have OSPF and EIGRP domains connected at two or more points, you would build one filter for all the OSPF routes and apply it to the redistribution from EIGRP into OSPF. Another filter would be built for all the EIGRP routes and applied to the redistribution from OSPF into EIGRP. This filtering has to be done on all boundary routers between the two protocols to be effective. You can match on the prefixes or you can use tags, which is my preference. The tags can be assigned as part of redistributing the routes into the target protocol, and then you can filter on those tags. You create a policy that first looks for the tag and denies matching routes, and otherwise tags the routes to identify the source protocol. This is done per direction, so two policies have to be created, as sketched below. All routing protocols, including RIPv2, can support tags.
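
As an illustration, here is a sketch of the tag-based feedback filters on one boundary router between OSPF process 1 and EIGRP AS 100. The tag values (90 for EIGRP-sourced routes, 110 for OSPF-sourced routes) and the metric values are arbitrary and purely illustrative; the same pair of policies would be repeated on every other boundary router.

route-map EIGRP-TO-OSPF deny 10
 match tag 110
route-map EIGRP-TO-OSPF permit 20
 set tag 90
!
route-map OSPF-TO-EIGRP deny 10
 match tag 90
route-map OSPF-TO-EIGRP permit 20
 set tag 110
!
router ospf 1
 redistribute eigrp 100 subnets route-map EIGRP-TO-OSPF
!
router eigrp 100
 redistribute ospf 1 metric 10000 100 255 1 1500 route-map OSPF-TO-EIGRP

In this sketch, a route that entered OSPF from EIGRP (tagged 90) is denied when redistribution runs from OSPF back into EIGRP, and a route that entered EIGRP from OSPF (tagged 110) is denied on the way back into OSPF, which is exactly the per-direction filtering described above.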

Let’s look at IGP route redistribution on Cisco devices. When redistributing from one protocol into another, there are a few things to remember:

  • The redistribution process pulls from the routing table, not the protocol’s database. If you are going to redistribute RIP into OSPF, the process looks for the routes labeled as RIP in the routing table. There is one exception: the connected routes of the interfaces the protocol is running on.
  • On Cisco routers for IPv4, those connected routes are automatically redistributed as well. This holds as long as you don’t also configure redistribution of connected routes into that same target protocol, which stops the automatic behavior.
  • On the Cisco routers for IPv6, the redistribution process does not redistribute those connected routes that the protocol is running on, unless you add the include-connected option on the redistribution line.

Some Cisco operating systems require a policy applied to the redistribution command before routes will be passed from one protocol to another. When redistributing into a protocol, you have to supply metrics for the routes so they’re in the correct format relative to the target protocol. The metric for one protocol doesn’t necessarily make sense for another, so a seed metric has to be attached to the external routes going into the target protocol. The table in Figure 1 displays the default seed metric for each source and target protocol combination.

Source    | into RIP | into EIGRP           | into OSPF | into IS-IS | into BGP (MED)
Connected | 1        | Interface metric     | 20 (E2)   | 0          | 0
Static    | 1        | Interface metric     | 20 (E2)   | 0          | 0
RIP       | n/a      | Infinite             | 20 (E2)   | 0          | IGP metric
EIGRP     | Infinite | Other process metric | 20 (E2)   | 0          | IGP metric
OSPF      | Infinite | Infinite             | n/a       | 0          | IGP metric
IS-IS     | Infinite | Infinite             | 20 (E2)   | n/a        | IGP metric
BGP       | Infinite | Infinite             | 1 (E2)    | 0          | n/a

Figure 1: Default Seed Metrics by Source and Target Protocol

If the seed metric is infinite, the route is not usable. You have to supply the seed metric when redistributing the source protocol into the target, either on the redistribution line or through the default-metric command under the target routing protocol (a configuration sketch follows below). The seed metric is in the format of the target protocol: hops for RIP, cost for OSPF and IS-IS, and the composite metric for EIGRP (bandwidth, delay, reliability, load and MTU).
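
As a minimal sketch of the two places a seed metric can be supplied (the values shown are illustrative only), you can set it on the redistribution line itself or with the default-metric command under the target protocol:

! Seed metric supplied on the redistribution line (OSPF cost of 50)
router ospf 1
 redistribute rip subnets metric 50
!
! Seed metric supplied with default-metric under the target protocol (EIGRP composite values)
router eigrp 100
 redistribute ospf 1
 default-metric 10000 100 255 1 1500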

One last consideration: if the source is BGP, only the external BGP routes will be redistributed into the IGP. This is a loop prevention mechanism. If you need to redistribute the internal BGP routes as well, configure the bgp redistribute-internal command under the BGP process (not under the target protocol), as sketched below.
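
A brief sketch of that last point, with illustrative AS and process numbers:

router bgp 65001
 bgp redistribute-internal
!
router ospf 1
 redistribute bgp 65001 subnets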

So, if you’re running more than one routing protocol and you need full or partial reachability, you’ll have to redistribute between those protocols. There are a few things to consider and plan for before you start configuring. Redistribution can be very simple (one pair of protocols, one point of contact) and can be very complex.

Related Courses
ROUTE – Implementing Cisco IP Routing v2.0
CIERS1 – Cisco Expert-Level Training for CCIE Routing and Switching v5.0
SPROUTE – Deploying Cisco Service Provider Network Routing




Sunday, 12 February 2017

A Simple Formula to Keep Projects on Time

Research has shown for many years that we can only do one thing at a time. More specifically, we can attend to only one cognitive task and process only one mental activity at a time: we can either talk or read, but not both at once. We can only hold one thought at a time, and the more we force ourselves to switch from one thing to another, the more we tax our mental faculties.

In 2001, Joshua Rubinstein, Ph.D., Jeffrey Evans, Ph.D., and David Meyer, Ph.D., conducted and published four experiments in which young adults switched between different tasks, such as solving math problems or classifying geometric objects. The findings for all tasks revealed:

  • The participants lost time when they had to switch from one task to another.
  • As tasks became more complex, the participants performing them lost more time.
  • As a result, people took significantly longer to switch between increasingly complex tasks.
  • Time costs were greater when the participants switched to tasks that were relatively unfamiliar.

It is important to pull ourselves away from work now and then; in fact, I teach my students a rhythm of 25 minutes to task, followed by a five-minute break, followed by another 25 minutes (repeat until the task is complete). Breaks are one thing, but distractions are another. Breaks are short, focused and deliberate. Distractions catch us off guard and derail our task entirely.

Meyer has said that even brief mental blocks created by shifting between tasks can cost as much as 40 percent of someone’s productive time. It seems unlikely that we will be able to refocus company culture to accept the virtues of scheduling and completing one task before starting another.

If this behavior is inevitable and unpredictable, what can we do to keep our projects on time? Perhaps the answer is a single point estimate using the resources’ availability and productivity.

Single Point Resource Availability and Productivity Technique

Estimating time in projects is typically done using a single point estimate derived from experience and best guess. At best, a single point estimate gives us a 50 percent probability of success. We can increase those odds by a few percentage points by accounting for both the availability of a resource and their average productivity, which studies suggest is between 72 and 74 percent. I will use a constant of 70 percent for simplicity.

In Equation 1 below, d represents duration, e represents the amount of effort needed to complete the task, a represents the resource’s availability and p represents the average productivity of a typical knowledge worker. Equation 2 works the formula through an example.

d = (e / a) / p
Equation 1. Single Point Estimate Using Availability and Productivity

Example

d = (10 hours / 50%) / 70%

d = 20 hours / 70% ≈ 29 hours
Equation 2. Single Point Estimate Using Availability and Productivity Example

Strengths of this technique

This technique is helpful when the hours given come with high confidence and the work is fairly routine and easy to calculate.

Weaknesses of this technique

We all know how difficult it is for most people to articulate, with any predictability, the accuracy of their timelines. Multitasking, or task-switching, is rampant in our fast-paced culture. Since this work ethic is not likely to change, we must use probabilistic math.

There are many ways to estimate time and some are more accurate than others. All come with uncertainty. We have a tendency to base our project estimates on the best guess, which makes it difficult to plan for the inherent randomness in these estimates. At best, a single point quote would only give us a 50 percent likelihood of success. We may increase those odds by looking at a best and worst estimate, plus the resource’s general availability and productivity.

You can learn about other project time estimation techniques in my white paper, A Toolkit for Project Time Estimation.




Tuesday, 7 February 2017

Know Your Options Before Selecting a Routing Protocol

Routers and switches make up the bulk of the network infrastructure and are vulnerable to attack. In a previous article, I talked about some of the different ways of hardening your network devices. In this blog, I’d like to specifically examine the routing protocols used on the major Cisco network operating systems.

All the routing protocols have the option to authenticate the neighbors (other routers) and the routing updates that are being received. Except for Open Shortest Path First version 3 (OSPFv3), all routing protocols have built-in methods to authenticate their peers to confirm the updates are coming from a trusted source. OSPFv3 uses IPv6 native IPsec support to authenticate and/or encrypt the OSPFv3 packets and, therefore, has the strongest security options of any of the routing protocols.

Depending on the version of code and protocol, there are various options for authentication of the routing protocols.

With Routing Information Protocol version 2 (RIPv2), OSPFv2 and Intermediate System to Intermediate System (IS-IS), clear text passwords are still an option, even though they should never be used since they can easily be seen with a protocol analyzer capturing the traffic between devices. The Message Digest 5 (MD5) hashing algorithm is available for all the routing protocols; it is considered to be broken, but it remains a better option than clear text passwords and uses 128 bits for authentication.

For Enhanced Interior Gateway Protocol (EIGRP) and Border Gateway Protocol (BGP), MD5 had been the only option for authentication until recently. Secure Hashing Algorithm (SHA) is better if available in your release of code. SHA is a family of different hashing algorithms: SHA-0 with a 160-bit hash algorithm is better than MD5, but is considered to be broken. SHA-1, also a 160-bit hash algorithm, fixes some of the issues of SHA-0, but has some of its own issues. SHA-2, which includes SHA224, SHA256, SHA512, SHA512/224 and SHA512/256, is currently considered secure. There are SHA-3 specifications, which use the same hash lengths as SHA-2, but with different internal operations. A version of SHA is available for BGP, EIGRP and OSPFv2, depending on code and licensing.

Let’s take a closer look at these protocols:

Routing Information Protocol (RIP)

The Cisco implementation of RIPv2 supports two modes of authentication: clear text authentication and Message Digest 5 (MD5) authentication. Clear text authentication is the default when authentication is enabled. Note: RIP version 1 (RIPv1) does not support authentication.

RIPv2 uses a key chain, defined in the global configuration, to set the key string. Key chains are a generic set of keys that can be used with multiple processes on the Cisco router, including RIP, EIGRP, ISIS, OSPFv2, HSRP and others.

With the introduction of the cryptographic algorithm option in the key chain configuration, you need to make sure the key chain is compatible with the protocol or feature that references it. RIPv2 doesn’t support this option within the key chain it references. The configuration for MD5 requires the mode to be set at the interface level for IOS and IOS-XE.

With IOS-XR, the reference to the key chain and the mode of authentication are done on the same configuration line. With NX-OS (Nexus switches), the mode and reference to the key chain are on separate interface configuration lines.

IOS/IOS-XE

key chain MyKey
 key 1
  key-string C1sc0
!
interface fastethernet0/1
 ip rip authentication mode md5
 ip rip authentication key-chain MyKey

IOS-XR

key chain MyKey
 accept-tolerance infinite
 key 1
  key-string C1sc0
  send-lifetime 1:00:00 january 1 2017 infinite
  accept-lifetime 1:00:00 january 1 2017 infinite
!
router rip
 interface Gi0/0/0/1
  authentication keychain MyKey mode md5
 !
!
! Note that IOS-XR requires a lifetime configured for the key to be valid

NX-OS

key chain MyKey
 key 1
  key-string C1sc0
!
interface ethernet1/1
 ip rip authentication mode md5
 ip rip authentication key-chain MyKey

Open Shortest Path First version 2 (OSPFv2)

OSPFv2 has supported clear text and MD5 authentication for a long time, but HMAC-SHA is an option with the introduction of RFC 5709. Starting with IOS 15.4T, Cisco now supports SHA for authenticating OSPFv2. Prior to 15.4T, the key had to be configured on the interface. Now it can be configured within a key chain. As of the writing of this blog, IOS-XR and NX-OS don’t support SHA for authentication for OSPFv2.

IOS/IOS-XE

interface fastethernet0/1
 ip ospf message-digest-key 1 md5 cisco
!
router ospf 1
 area 0 authentication message-digest
!
or 
interface fastethernet0/1
 ip ospf message-digest-key 1 md5 cisco
 ip ospf authentication message-digest

SHA:

key chain MyKey
 key 1
 key-string C1sc0
 cryptographic-algorithm hmac-sha-256
!
interface fastethernet0/1
 ip ospf authentication key-chain MyKey

IOS-XR

key chain MyKey
 accept-tolerance infinite
 key 1
  key-string C1sc0
  send-lifetime 1:00:00 january 1 2017 infinite
  accept-lifetime 1:00:00 january 1 2017 infinite
!
router ospf 1
 area 0
  interface Gi0/0/0/1
   authentication keychain MyKey mode md5
 !

NX-OS

interface ethernet1/1
 ip ospf message-digest-key 1 md5 C1sc0
 ip ospf authentication message-digest
or

key chain MyKey
 key 1
 key-string C1sc0
!
interface ethernet1/1
 ip ospf authentication key-chain MyKey
 ip ospf authentication message-digest

Open Shortest Path First version 3 (OSPFv3)

Unlike the other routing protocols, OSPFv3 takes advantage of the IPv6 transport it rides on. IPsec support is native to IPv6, so rather than reinventing the wheel, OSPFv3 can use it. You can define authentication and/or encryption for the OSPFv3 packets. NX-OS doesn’t presently support authentication for OSPFv3.

IOS/IOS-XE

interface fastethernet1/0
 ipv6 enable
 ipv6 ospf 1 area 0
 ipv6 ospf authentication ipsec spi 500 sha1 C1sc0123456789

or

ipv6 router ospf 1
 area 0 authentication ipsec spi 1000 sha1 C1sc0123456789

IOS-XR

router ospfv3 1
 area 0
  interface gigabitethernet0/0/0/1
   authentication ipsec spi 1000 sha1 C1sc012345678

Intermediate System to Intermediate System (IS-IS)

IS-IS can authenticate at the interface, area or domain level. Prior to RFC 3567, IS-IS only supported clear text authentication. RFC 3567 added support for MD5 authentication. RFC 5310, “IS-IS Generic Cryptographic Authentication,” introduces SHA as an authentication algorithm for IS-IS. No Cisco network operating system currently supports SHA for IS-IS authentication.

On IOS-XR platforms, there are two types of authentication: link-state packets (LSPs) and hellos. IS-IS supports using a key chain to define the key string, or the string can be applied directly to the authentication command. With Nexus (NX-OS), the key string must be defined within a key chain; you cannot define it directly on the authentication command.

IOS/IOS-XE

interface fastethernet1/0
 ip router isis
 isis password C1sc0

or

key chain MyKey
 key 1
 key-string C1sc0
!
interface fastethernet0/1
 ip router isis
 isis authentication mode md5
 isis authentication key-chain MyKey

or

router isis
 authentication mode md5
 authentication key-chain MyKey

IOS-XR

router isis 1
 lsp-password hmac-md5 clear C1sc0
 interface gigabitethernet0/0/0/1
  hello-password hmac-md5 clear C1sc0

or

key chain MyKey
 accept-tolerance infinite
 key 1
  key-string C1sc0
  cryptographic-algorithm hmac-md5
  send-lifetime 1:00:00 january 1 2017 infinite
  accept-lifetime 1:00:00 january 1 2017 infinite
!
router isis 1
 lsp-password keychain MyKey
 interface gigabitethernet0/0/0/1
  hello-password keychain MyKey
 !

NX-OS

key chain MyKey
 key 1
 key-string C1sc0
!
interface ethernet1/1
 isis authentication-type md5 level-2
 isis authentication key-chain MyKey level-2

or

router isis
 authentication-type md5 level-2
 authentication key-chain MyKey level-2

Enhanced Interior Gateway Protocol (EIGRP)

EIGRP has been a Cisco protocol since 1993, replacing Cisco’s previous protocol, IGRP. In 2013, Cisco released a draft RFC for EIGRP, which was published in 2016 as RFC 7868. Per the RFC, EIGRP supports the MD5 and SHA authentication types. Currently, IOS-XR and NX-OS only support MD5 authentication for EIGRP.

IOS/IOS-XE

key chain MyKey
 key 1
 key-string C1sc0
!
interface fastethernet0/1
 ip authentication mode eigrp 35 md5
 ip authentication key-chain eigrp 35 MyKey

SHA:

router eigrp Fred
 address-family ipv4 autonomous-system 35
  af-interface fastethernet 0/1
   authentication mode hmac-sha-256 0 C1sc0
key chain MyKey
 key 1
 key-string C1sc0
!
interface fastethernet0/1
 ipv6 authentication mode eigrp 35 md5
 ipv6 authentication key-chain eigrp 35 MyKey

SHA:

router eigrp Fred
 address-family ipv6 autonomous-system 35
  af-interface fastethernet 0/1
   authentication mode hmac-sha-256 0 C1sc0

IOS-XR

key chain MyKey
 accept-tolerance infinite
 key 1
  key-string C1sc0
  send-lifetime 1:00:00 january 1 2017 infinite
  accept-lifetime 1:00:00 january 1 2017 infinite
!
router eigrp 35
 address-family ipv4
  interface gigabitethernet0/0/0/1
   authentication keychain MyKey
 address-family ipv6
  interface gigabitethernet0/0/0/1
   authentication keychain MyKey

NX-OS

key chain MyKey
 key 1
 key-string C1sc0
!
interface ethernet1/1
 ip authentication mode eigrp 35 md5
 ip authentication key-chain eigrp 35 MyKey
key chain MyKey
 key 1
 key-string C1sc0
!
interface ethernet1/1
 ipv6 authentication mode eigrp 35 md5
 ipv6 authentication key-chain eigrp 35 MyKey

Border Gateway Protocol (BGP)

BGP is a complicated protocol—it has to be. BGP is the only routing protocol able to scale to advertising the hundreds of thousands of routes found on the internet today. BGP is also used to support other applications and protocols such as layer 2 and layer 3 VPNs within an MPLS network. In the public internet, there are individuals that want to be disruptive, hold others hostage or redirect traffic for the purpose of theft. BGP offers authentication, as well as other security options. IOS-XR is the only Cisco network operating system capable of SHA authentication. All the others only use MD5.

If the security of your BGP relationships and updates is a significant concern, you can always peer the neighbors through an IPsec tunnel. Depending on the crypto capability of your release of code, you could see a significant increase in security even though you’re not using the protocol’s built-in authentication. Let’s look at what’s available in the protocol itself:

IOS/IOS-XE

router bgp 65001
 neighbor 192.168.5.1 password C1sc0

IOS-XR

router bgp 65001
 neighbor 192.168.5.1
  password clear C1sc0

SHA:

key chain MyKey
 accept-tolerance infinite
 key 1
  key-string C1sc0
   cryptographic-algorithm hmac-sha1-12
  send-lifetime 1:00:00 january 1 2017 infinite
  accept-lifetime 1:00:00 january 1 2017 infinite
!
router bgp 65001
 neighbor 192.168.5.1
  keychain MyKey
 !

NX-OS

router bgp 65001
 neighbor 192.168.5.1 remote-as 65002
  password 0 C1sc0

It’s clear there are varying degrees of consistency between the Cisco network operating systems when it comes down to authenticating the routing protocols. I’ve examined the options of one router vendor. Consider the additional complexities of a multi-vendor shop with multiple router manufacturers, each with their own way of doing things.

The bottom line is we have to protect our network infrastructure. No matter which routing protocol you use, there are options for how to authenticate the neighbor to ensure the updates are coming from a trusted source. Use the strongest common authentication hashing algorithm you can find. Network technologies evolve, vendors evolve and options evolve, so reexamine periodically what is available and upgrade whenever you have the opportunity.

Related Courses
ICND1 v3.0 – Interconnecting Cisco Networking Devices, Part 1
CCNAX v3.0 – CCNA Routing and Switching Boot Camp
ROUTE – Implementing Cisco IP Routing v2.0
TSHOOT – Troubleshooting and Maintaining Cisco IP Networks v2.0
BGP – Configuring BGP on Cisco Routers v4.0
ARCH – Designing Cisco Network Service Architectures v3.0
MPLS – Implementing Cisco MPLS v3.0




Thursday, 26 January 2017

Data Privacy Day: Why to Care and What to Do

On January 27, 2014, the United States Congress adopted S. Res. 337, a nonbinding resolution expressing support for the designation of a “National Data Privacy Day” to be observed on January 28. There wasn’t a lot of time to get the word out, even though the event had been around for a while.

But that’s the date on which I first became aware of Data Privacy Day, and in the years since the bill’s passage, I’ve been a champion of personal data privacy as well as data privacy at work.

The choice of January 28 was no fluke. On that date in 1981, the Council of Europe held the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data. Luckily, they shortened the name a bit for the event and signed Convention 108, the first legally binding international treaty dealing with privacy and data protection.

Data Privacy Day began in the U.S. and Canada in January 2008 as an extension of the Data Protection Day celebration in Europe. The international event promotes awareness of privacy and data protection best practices. Recognized in the U.S., Canada and 27 European countries, Data Privacy Day’s educational initiative focuses on raising awareness among users and businesses of the importance of protecting the privacy of their data online. This has become even more important as social networking has grown in popularity over the years, as have security breaches.

Data Privacy Day’s goal is to educate and empower businesses, consumers and families with the knowledge and best practices to better protect themselves from hackers, viruses and malware that can put their information at risk. Data Privacy Day brings together not only technology folks but also government officials, educators, those involved with nonprofits and leaders across industry sectors.

So what can you do about data privacy? If the security of your data and privacy matters to you, Data Privacy Day is a great time to start actively protecting your info. Target, Sony and Yahoo all learned the hard way. It’s in the best interest of every business to practice good data stewardship or they’ll be the next lead story on CNN or the next big headline in The New York Times. Whether it’s your bank, doctor, pharmacy or even workplace, encourage them to protect your data sufficiently. Don’t ever assume your data is protected. Be your own data privacy advocate.

The National Cyber Security Alliance coordinates the promotion of Data Privacy Day activities. Here are some of the things they encourage us to do to promote Data Privacy Day:

  • Socialize it. To protect your personal data, you don’t have to be afraid of social media. Tweet privacy tips. Post messages on your Facebook and LinkedIn accounts. You can use the official Data Privacy Day hashtag #PrivacyAware and follow @DataPrivacyDay to stay up to date on all of the latest Data Privacy Day tips to share with your connections and followers.
  • Make it official. You can suggest your organization show its support of Data Privacy Day by becoming an official Data Privacy Day Champion. Last year, more than 450 organizations enrolled as Data Privacy Day Champions. It’s quick and easy to sign up.
  • Make it personal. Data privacy starts at home, so make sure your loved ones know the risks to their personal information, especially children and teenagers who may be more likely to overshare on social media channels. Secure your information if you have shared accounts on your PCs, tablets or smart TVs that are connected to multimedia outlets like Netflix, Hulu, Amazon Prime and iTunes.

Related Courses
Cybersecurity Foundations
Legal Issues in Information Security
Certified Information Privacy Professional US Private-Sector (CIPP/US) Prep Course




Sunday, 22 January 2017

Why TOGAF is the Framework to Unite IT and the Business

Business transformation is driving a change in the relationship between IT and the business. Internal and external forces are requiring organizations to be more responsive to customer needs and achieve operational and technological efficiencies. IT professionals traditionally only had to focus on the technology architecture. Now, the overall enterprise architecture must be considered in order to eliminate barriers between technology capabilities and business strategy.

What is TOGAF®?

The Open Group Architecture Framework (TOGAF) is a globally recognized standard for developing enterprise architecture. The comprehensive framework includes techniques and a set of supporting tools to provide organizations with the capability to ensure all architectural components are aligned to the strategic direction of the business.

The Benefits of TOGAF

  • Ensure that everyone speaks the same language.
  • Avoid lock-in to proprietary solutions by standardizing on open methods for enterprise architecture.
  • Save time and money, and utilize resources more effectively.
  • Achieve demonstrable return on investment (ROI).

TOGAF Summary

The TOGAF framework can best be illustrated by the following five layers, as defined by The Open Group:

  1. Architecture Principles, Vision and Requirements: This layer describes the initial phase of an architecture development cycle. It includes information about defining the scope, identifying the stakeholders, creating the architecture vision, and obtaining approvals.
  2. Business Architecture: Describes the development of a business architecture to support an agreed architecture vision.
  3. Information Systems Architecture: This layer describes the development of information systems architectures for an architecture project including the development of data and application architectures.
  4. Technology Architecture: This layer describes the development of the technology architecture for an architecture project.
  5. Architecture Realization: This layer is the realization of the architectural components that are necessary for driving business value.

Traditionally, IT professionals have been focused on just the Technology Architecture layer, as well as specific technologies and technology solutions. Limiting the focus to a single layer is equivalent to working in a silo—making it nearly impossible to achieve a holistic view of the business. Attention needs to be devoted to all layers to ensure proper alignment with business principles, vision, requirements and architecture.

Organizations increasingly require IT professionals to gain a more thorough understanding of the business to ensure that technology solutions adequately support business requirements, vision and strategy. Incorporating TOGAF methods facilitates an understanding of the business and achieves IT results that help drive business value.

For a more detailed look at how organizations can ensure technology is aligned with business strategies, see my white paper, The Power of Linking Business Analysis and TOGAF® to Achieve IT Results.

Related Courses
TOGAF 9.1 Level 1 and 2




Monday, 16 January 2017

3 Essentials to Avoid IT Project Failure

Projects are often complex, made up of a large number of moving pieces and bring numerous challenges to those involved. The reality is projects don’t always go the way we want them to. We often find ourselves being asked to do more with less and at a faster pace than we’re comfortable with. On occasion, we are successful in our efforts despite these restrictions.

Many key factors contribute to project success, but three fundamentals stand out: stakeholder identification and analysis, effective communication, and the identification of project requirements, stakeholder expectations and scope of work.

Stakeholders

Stakeholders may affect or be affected by the project—through a project decision, activity or outcome—or they may simply perceive themselves to be affected. The impact or perceived impact can be either positive or negative in nature.

Most projects have a large number of stakeholders. Identifying all stakeholders increases the chance of project success. You must secure and document relevant information about their interests, interdependencies, influences, potential involvement, and probable impact on the project definition, execution and final results. After obtaining this information, classify the stakeholders according to their characteristics. This will make it easier to develop a strategy to manage each stakeholder. An increased focus on key relationships is critical to project success.

Communication

How do you manage expectations? Through communication. It’s vitally important that there is a well-defined communication plan tailored to fit the project and stakeholders.

One component of critical thinking is to learn by questioning. The Project Management Institute (PMI®) indicates that 90 percent of a project manager’s job is communication; however, communication is not limited to just talking. Proper communication involves listening, reading reports, generating reports, filtering information from one group to another, etc. To do this effectively, you need a well-defined communication management plan. The creation of any plan like this can be broken down into six questions that need to be asked continually: Who, What, When, Where, How and Why?

For example:

  • Who needs to be communicated to?
  • What needs to be communicated?
  • When does that need to take place?
  • Where does it need to happen?
  • How is it going to happen?
  • Why does it need to happen?

Project Requirements, Stakeholder Expectations and Scope of Work

Before beginning your project, make sure you have clarified all goals, objectives and requirements. Obtaining clarity on what is required and ultimately gaining buy-in from the major stakeholders are crucial. These requirements, along with related goals, objectives and deliverables, become the scope of work that must be completed and will be refined over the life of your project. But if you do not start with a solid understanding of what you’re trying to achieve, you might as well not begin at all.

It will be difficult, if not impossible, to achieve success from the stakeholders’ point of view if there is a lack of clarity concerning their perception of the project and their expectations. This is why stakeholders must be identified and their expectations analyzed as early in the project lifecycle as possible.

Keeping the three key steps—identification and analysis of project stakeholders, the creation and use of an effective communication plan, and proper identification of project requirements, stakeholder expectations and accurate decomposition of the scope of work—at the forefront during project planning and execution greatly enhances the ability to achieve success.

For a deeper dive into how to guarantee project achievement, view my white paper, Three Steps to Ensure the Success of Your IT Projects.

Related Courses
IT Project Management
Project Management Fundamentals
Project Management, Leadership, and Communication
Requirements Development, Documentation and Management




Tuesday, 10 January 2017

Want a Strong ITIL Strategy? Start with the Right People

Many ITIL® training alums may wonder, “Now that I’ve learned these best practices, how do I actually implement them in my workplace?”

An effective ITIL strategy is the product of a concerted focus on people, process and technology, as well as an ongoing, cost effective and valuable IT service management (ITSM) practice.

While processes and technology play key roles in ITSM, the role of people resources and capabilities cannot be overstated. When entering an organization, people start out as a resource (raw material). Training and skills development help your people mature into capable teams that have the ability to carry out your ITIL initiative, execute your processes and deliver value to customers and users. Without an ongoing training program, your ITIL initiative is at risk.

Poorly trained managers and staff:

  • Lack the understanding and motivation to drive the ITIL initiative forward, which puts the success of IT Service Management at risk.
  • Lack the skills and knowledge to implement the processes and build functional teams.

With a concerted training program and a training plan for each role, your team will become a unified powerhouse that will propel your ITSM initiative to long-term success. Service and process owners will carry out their roles effectively, ensuring that quality robust services are defined while supporting processes guarantee a quality delivery. Practitioners—the individuals and teams carrying out the various steps of your processes—will have the skills and understanding to execute the processes consistently and efficiently, delivering high availability and performance of your services.

Your staff may have completed the first step—ITIL Foundation training—but that only establishes ITIL groundwork. You still need the skilled people capabilities to truly be successful with your ITIL initiative. In order to equip people assets, your staff’s foundational knowledge must be enhanced with ITIL Intermediate training.

For more tips on how to implement ITIL’s best practices into your organization, see the white paper, You’ve Completed ITIL® Foundation: Now How to Implement It.

Related Training
ITIL Foundation
ITIL Intermediate courses




Thursday, 5 January 2017

7 Products Revealed at AWS re:Invent To Get Excited About

AWS continued to rock the technology world by revealing an array of new services at its annual re:Invent conference. With the release of new products and feature add-ons to existing properties, AWS shows no signs of slowing down.

If you’re looking for a list of the most impactful new AWS services, you’ve come to the right place. The following are the 7 coolest product announcements from re:Invent 2016.

  1. AWS Athena (new service) – Anyone who has used the S3 service knows of its data querying limitations. Prior to Athena, running rich queries was a complex process, having to build your own client-side database and breaking out all the relevant bits (like file type, file name, file content) you’d like to query. With Athena, AWS takes care of this process for you with a fast, hands-off, pay-as-you-go service. Athena uses Presto (one of the most popular tools to query cloud-based data) and works with many different types of formats.
  2. AWS Rekognition (new service) – Although AWS pretty much owns the public cloud, it’s long been far behind nearly everyone else (IBM, Google and Microsoft) in terms of machine learning. With the release of Rekognition (and three other new services, Lex, Polly and MXNet), AWS is finally putting a stake in the machine learning ground. Rekognition is an image recognition service that analyzes pictures to tell you details—detecting objects and scenes, while using facial recognition and analysis to identify people. But maybe the most exciting thing about the AWS launch into AI is CEO Andy Jassy’s statement that “more will be coming in 2017.” Expect AWS to finally catch up to the machine learning competition this year.
  3. AWS Shield (new service) – Shield is an AWS service built to holistically protect your AWS assets against a distributed denial-of-service (DDoS), one of the worst types of Internet attacks. Shield has pre-packaged and configurable protection rule sets you can enable across various types of back-end services, including your own custom-built EC2 or Lambda-based solutions. With the service, AWS is also providing access to the 24×7 DDoS Response Team (DRT), which can either assist during a live attack or preemptively help with rule creation.
  4. Amazon Lightsail (new service) – Need a cheap-as-possible, point-and-click managed web server for your business, kid’s soccer team or bowling club? Do you use GoDaddy, Digital Ocean or a similar provider to easily set up and manage things like DNS, monitoring and key management? Lightsail offers an SSH-accessible, easy-to-manage server—perfect for non-techies—in five different price points ranging from $5 to $80 a month. With Lightsail, you get all the power of EC2 with the ease of a GoDaddy-like interface.
  5. C# for Lambda (new feature) – Lambda (which offers serverless computing) was one of the hottest new services released at re:Invent in 2014, but with its limited language support (Python, Java, Node.js), the Microsoft crowd was left pretty much out in the cold. With the addition of C# support, Lambda now allows .NET developers access to the same pay-as-you-go, infinitely scalable, functional programming model that others have enjoyed for two years.
  6. AWS Batch (new service) – The scale of AWS, in terms of both raw number of services (now more than 60) and the features of each service (many have more than 100 available API calls), means you have unlimited options when processing big data or rendering images, video or audio. With unlimited options comes great confusion—which of the 80-plus instance types work best? How many do you need? Where should they output the data to? AWS Batch dynamically provisions the optimal quantity and type (like high-CPU or high-RAM) based on the volume and specific resource requirements of each batch job submitted. Like many other services including VPC, IAM and CloudFormation, Batch is free to use and you’re only charged the normal cost of the underlying resources.
  7. AWS Glue (new service) – Glue is a fully managed ETL service that simplifies and automates the difficult parts of discovering, transforming and moving data. It can connect to any JDBC-compliant data store (either AWS-based, or even on-premises), automatically logging into and crawling your schemas. It then suggests schemas and transformations (which you can edit, if you like). Once you accept the transformations, Glue goes about running the actual data flow job to move the data out of the source and into the sink. Like AWS Batch, it’s free to use and you’re only charged the cost of the underlying resources.

Other announcements of note: Snowmobile (an Exabyte-scale data transfer service), Elastic GPUs for EC2 (allows you to dynamically attach GPUs to current gen EC2 servers), F1 instances for EC2 (FPGA-based EC2 instances), Amazon Pinpoint (targeted push notifications—useful for personalized marketing campaigns), X-Ray (allows developers to more easily analyze and debug issues in their distributed systems) and Step Functions (provides a visual workflow interface to simplify the coordination of components in a microservices-based architecture). If you’re unfamiliar, it’s worth noting that FPGAs and GPUs are highly useful for machine learning workloads.

A full list of all service announcements from re:Invent can be found on AWS’ “Product Announcements” page. The launch into machine learning this year looks especially promising for those looking to build out ML platforms to support either their own applications or ones centered on Amazon Echo. If 2016 was “the year of IoT” for AWS, they’ve just set up the starting block for 2017’s “year of ML.”

Related Training
AWS Training




Sunday, 1 January 2017

The Key to Achieving True Agility in an Agile Environment

Agility is the ability to adapt to changing conditions and respond to evolving needs. In a rapidly shifting, complex organizational environment, greater flexibility contributes to lower project costs and more effective project outcomes. This is achieved by making project management systems more responsive to discoveries, revelations and changes that arise during execution.

Components of greater organizational agility:

  1. Shorter planning windows (more frequent, shorter projects linked together into a cohesive program).
  2. Lighter documentation (made possible by shorter planning windows and hands-on strategic coordination provided by program management).
  3. Increased customer involvement (supported by short projects and program oversight).

Every organization can be more agile, but only to the extent that it is willing to plan tactically in the short run and strategically in the long run. The agile approach requires near-term specifics and long-term generalizations.

An organization becomes more agile if the systems that support projects allow for more frequent re-evaluation and alteration of a project’s execution strategy. Increased agility is not simply a consequence of adopting a few Agile project management tools. A project management system is deemed agile based on how adaptive it is to changing conditions, regardless of the techniques employed.

Waterfall-style project planning holds organizations back from being more agile. Heavy up-front analysis and documentation result in rigid plans for outcomes long into the future. By the time an exhaustive study is complete, the requirements are already out of date because of technical and market changes.

The solution is Agile program management, which enables the delivery of benefits through a series of shorter projects that are strategically tied together. These short projects allow the overall execution strategy to be regularly revised. Documentation is kept light to support flexible planning. Detailed documentation, such as plans and expectations, is only developed for near-term deliverables and general documentation is only created for long-term outcomes. The combination is a more agile project delivery system that better serves ever-evolving customer needs, emphasizing the short-term without losing sight of the long-term.

Greater agility is within reach of any organization with or without the use of classical Agile software development techniques. Agility comes from having a responsive, flexible project management system.

See the full white paper, A New Trend in Agile – Incorporating Program Management, for more details about enhancing organizational agility.



from
CERTIVIEW

Tuesday, 20 December 2016

Top 5 Blog Posts of 2016

Global Knowledge’s top blogs of 2016 spotlight the rise of developers, the fear of hackers and an overwhelming love of tech toys.

We learned plenty about our readers when examining the most-viewed posts of the year.

They sought solutions—how can organizations improve software delivery to customers?

They sought security—how is the federal government planning to protect the data of private citizens?

They wanted a peek at the hottest gadgets—who doesn’t?

Here are our top 5 blog posts of 2016:

5. How the First Email Message was Born

“That first email was sent from one Digital Equipment Corporation computer to another DEC-10, which happened to sit beside each other in (Ray Tomlinson’s) lab.”

We send and receive so many emails a day that we tend to take email for granted. Well, so did its creator, Ray Tomlinson.

Tomlinson sent the first email in 1971 and thought so little of it that he didn’t even save the test message as a keepsake. It was so insignificant to Tomlinson that he only vaguely remembers the original message—it was something resembling “QWERTYUIOP.”

In fact, he didn’t realize the significance of his invention until he later showed it to a colleague.

Tomlinson passed away in March at the age of 74.

4. Federal Agencies Prepare for Massive Cybersecurity and Privacy Revamp

“The president’s unprecedented plan is a 35 percent increase in government-wide cybersecurity spending from the 2016 federal budget.”

This blog is probably more relevant now than when it was posted in March. With Yahoo’s massive data breach and the recent DDoS attacks that impacted major web properties such as Netflix and Twitter, cybersecurity is a major concern for both businesses and consumers.

Recent intelligence findings concerning Russia’s influence in the presidential election have intensified fears as well. Can the federal government protect its own citizens from hackers?

In February, President Barack Obama created the Cybersecurity National Action Plan (CNAP), proposing a $19 billion budget to fund cybersecurity and update the government’s outdated IT systems. This post examines the details of the president’s plan and how Global Knowledge cybersecurity training can aid federal employees.

3. What Developers Can Expect in 2016

“As professional developers, we should know more than one programming language. … The question always remains, ‘Which language should I learn?’”

Author and developer Bradley Needham made some spot-on predictions in this early-2016 post.

He anticipated the importance of DevOps and tools that aid its success. He suggested developers learn more than one programming language and foresaw advancements in wearable tech and the software that drives them.

Needham also touches on artificial intelligence concerns that are sweeping the industry and stresses the need for software professionals to proactively work together to make sure “we get it right.”

2. Are DevOps and ITIL® in Conflict or Complementary?

“DevOps provides us with a fresh perspective to examine the ITIL framework in several key areas that will improve core processes, functions and principles within ITIL.”

Author Paul Dooley doesn’t leave any gray area here—the answer, resoundingly, is “complementary.” Dooley notes there are no conflicts between DevOps and ITIL, and the collaborative nature of DevOps adds value to service transition, service operation and the Continual Service Improvement process.

Since ITIL is the hub of best practices for the IT industry, service providers benefit greatly by incorporating complementary practices like DevOps. If implemented correctly, this type of practice should strengthen the alignment between the business and its customers.

1. Tech the Halls: Top 12 Gadgets of the Holiday Season

“Whether you prefer to stand in line for hours to buy the newest smartphone or long for the days of 8-bit gaming, there’s a perfect tech toy for you this holiday season.”

Virtual reality gaming, video doorbells, app-controlled droids … the future is here when it comes to the most coveted tech toys for the 2016 holiday season.

Global Knowledge’s tech lovers selected the gizmos they want most this year. Some are easier to come by than others. (Apologies to anyone hoping to find an NES Classic under their tree on Christmas morning. Most stores sold out the day they went on sale.)

Whether you want an iPhone 7 or a new pair of wireless headphones, the best part about filling out your tech toy wish list is feeling like a kid again.



from
CERTIVIEW

Sunday, 11 December 2016

DDoS Blog Series Part 2: How Do Consumers and Businesses Protect Against Cyber Crime?

“This demonstrates the fragility of the network and infrastructure.” – Shawn Henry, chief security officer, CrowdStrike

Several spectacular attacks in the past few months have demonstrated the power of distributed denial-of-service (DDoS) attacks and the importance of cybersecurity. DDoS attacks against blogger Brian Krebs, hosting provider OVH and domain name system provider Dyn crippled a reporter’s web site, shut down cloud-based customers and blocked access to major services such as Twitter, Amazon, Netflix, Airbnb and Etsy.

What can individuals and organizations do to prevent themselves from becoming an unwitting accomplice to an attack? Furthermore, what can organizations do to protect themselves?

A denial-of-service (DoS) attack allows cybercriminals to disable an organization’s Internet presence or block access to the business’s networks. Identifying these attacks is relatively straightforward, or at least they are easier to resolve, because they appear to originate from a small number of identifiable Internet Protocol (IP) addresses. The victim can then block incoming Internet traffic from those specific IPs.

When hackers launch a DDoS assault, the problem becomes much larger for two reasons:

  1. The number of computers performing the attack can be huge—an estimated tens of millions in the case of Dyn.
  2. The volume of the attack magnifies dramatically—an estimated 1.2 terabits per second in the Dyn attack, according to Dyn Chief Strategy Officer Kyle York.

Many hackers deploy a remote access Trojan (RAT) to control the computers they have compromised. A single hijacked system attacking another organization would not accomplish much on its own. Instead, attackers assemble large-scale remote-control networks, often called Botnets, made up of malware-infected devices (“bots” or “zombies”). Under the direction of these massive command-and-control networks, cybercriminals use the hijacked systems to carry out a DDoS attack.

In the latest series of attacks, hackers used software called Mirai, an Internet-of-Things (IoT) Botnet. Instead of using infected home computers, they used smart devices found in everyday homes—webcams, DVRs, thermostats, TVs and refrigerators. Many IoT devices have built-in vulnerabilities, such as weak default passwords and extraneous network protocols. Mirai was able to exploit these weaknesses and launch massive data floods across the Internet.

There are numerous ways for consumers to protect against these kinds of attacks:

  • Keep up to date on your vendor’s security patches. This includes Microsoft, Apple, Adobe and Google software.
  • Have a currently-licensed copy of highly-rated antivirus or anti-malware software and keep the signatures current. When in doubt, check one of the sites that rank these products. This doesn’t need to be an expensive proposition—there are several free antivirus products with high ratings in the industry that suffice. Further, some Internet service providers, like Comcast, supply you with software as part of your subscription. If you or a direct family member work for the U.S. government, you are entitled to free antivirus protection as well.
  • Practice vigilance on the Internet; watch for suspicious web sites or browser behavior. Also, understand that one of the largest vectors for malware is through email attachments.
  • If it’s free on the Internet, it’s too good to be true—including pirate sites for downloading movies, TV shows, music, games and software.
  • For your IoT devices, set them up with long, strong and complex passwords. If you can, look for services such as Telnet and Secure Shell (SSH) and disable them. Occasionally visit your vendors’ web sites to make sure you have the latest software for your smart devices. Lastly, when a manufacturer recalls an IoT product because of software vulnerabilities, make sure you take advantage of it!

Any organization that has an Internet-facing presence could be the subject of a DDoS attack, which can be crippling, even for the largest companies. There are basic protections and mitigations any organization can invoke. These include:

  • Follow industry-standard best practices:
    • Be certain that each Internet-facing server only performs a single task, such as being a web server or responding to DNS queries.
    • Perform system hardening by removing unnecessary services and staying current with security patches.
    • Monitor your systems for signs of an attack.
  • Prioritize redundancy by utilizing:
    • multiple Internet service providers.
    • multiple infrastructure resource servers, such as DNS on different IP networks.
    • geographically-distributed data centers and processing.
  • Consider using an anti-DDoS service such as Akamai/Prolexic, Amazon CloudFront or Cloudflare. Some of these organizations even offer free basic anti-DDoS products. Alternately, every major Internet service provider has services they can activate within their networks.

Related Post
DDoS Blog Series Part 1: Evolving Internet Attacks Turn Smart Devices Against You

Related Courses
Cybersecurity Foundations
Certified Network Defender (CND)
Certified Ethical Hacker v9



from
CERTIVIEW

Wednesday, 7 December 2016

4 Reasons Why Now is the Right Time to Learn Web Development with TypeScript

Whether you’re a JavaScript beginner, expert or fanatic—now is a great time to learn TypeScript, a programming language designed to make JavaScript strongly typed and capable of supporting large-scale web applications. TypeScript is a superset of JavaScript, and its recent release, TypeScript 2.0, adds extra features, such as glob support, to make a developer’s life easier. It provides the flexibility to write JavaScript programs that can grow over time without becoming too unwieldy and frees you to concentrate on learning JavaScript frameworks, such as Express and Angular, that empower you to build both RESTful web services and modern client applications.

I just authored a new 5-day course on TypeScript—Essential TypeScript 2.0 with Visual Studio Code—a culmination of a four-month odyssey in which I not only had to learn TypeScript grammar and syntax, but also master an entirely new technology stack and toolchain. Here is a list of topics included in the course:

  1. Introduction to TypeScript
  2. TypeScript Language Basics
  3. Using Visual Studio Code with TypeScript
  4. Task Automation, Unit Testing, Continuous Integration
  5. The TypeScript Type System
  6. Functional Programming
  7. Asynchronous Programming
  8. Object-Oriented Programming
  9. Generics and Decorators
  10. Namespaces and Modules
  11. Practical TypeScript with Express and Angular

I thoroughly enjoyed the process of adding a new weapon to my arsenal as a software developer and the chance to venture off in an entirely new direction. Here are four reasons why now is the right time for you to learn TypeScript.

1. Revenge of JavaScript

A compelling reason to learn JavaScript is that it can be used to write apps for more than just web browsers–you can use it to write desktop and mobile apps, as well as back-end services running in the cloud. JavaScript has unwittingly become one language to rule them all.

Web development has also matured to the point where it’s possible to write an app that has nearly the same interactivity and responsiveness as a traditional desktop application. With the advent of Single Page Applications (SPAs), turbocharged JavaScript engines quickly render rich, interactive web pages. It’s the perfect time to build SPAs because second generation frameworks have emerged that take web development to a whole new level and implement the Model-View-ViewModel (MVVM) pattern (or some MV-* variation), providing benefits such as better separation of concerns, testability and maintainability. Frameworks like Angular, Aurelia and React-Redux also provide tools for quickly scaffolding new applications and preparing them for production.

TypeScript has emerged as the language of choice for building many of these kinds of modern web apps because strong typing enables features we take for granted, such as interfaces and generics. It also provides capabilities most developers couldn’t live without, such as IntelliSense, statement completion and code refactorings.
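
To make that concrete, here is a small sketch (the type and class names are purely illustrative) of how interfaces and generics let the compiler, and therefore IntelliSense, catch mistakes before the code ever runs:

```typescript
// A minimal, illustrative example of interfaces and generics.
interface Course {
  id: number;
  title: string;
}

// A generic, type-safe container: only items shaped like T are accepted.
class Repository<T extends { id: number }> {
  private items: T[] = [];

  add(item: T): void {
    this.items.push(item);
  }

  findById(id: number): T | undefined {
    for (const item of this.items) {
      if (item.id === id) {
        return item;
      }
    }
    return undefined;
  }
}

const courses = new Repository<Course>();
courses.add({ id: 1, title: "Essential TypeScript 2.0" });
// courses.add({ id: 2 });  // compile-time error: property 'title' is missing

const found = courses.findById(1);
if (found) {
  console.log(found.title);  // the editor knows 'found' is a Course here
}
```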

2. JavaScript Has Grown Up

In 2015, JavaScript had its most significant upgrade since it was created in 1995 by Brendan Eich in a 10-day hackathon. With the release of ECMAScript 2015, JavaScript received a slew of new features, including classes, inheritance, constants, iterators, modules and promises. TypeScript not only includes all ES 2015 features, but it fast forwards to future versions of ECMAScript by supporting proposed features such as async and await operators, which help simplify asynchronous code. TypeScript lets you use advanced features of JavaScript by transpiling down to ES5, a flavor of JavaScript compatible with most browsers.
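
As a quick illustration, here is a hedged sketch of the async and await operators over a promise-based function. The function and values are invented for the example, and it assumes a Promise implementation is available in your compilation target:

```typescript
// Simulate an asynchronous call, e.g. to a REST service (illustrative only).
function fetchScore(studentId: number): Promise<number> {
  return new Promise<number>(resolve => {
    setTimeout(() => resolve(42), 100);
  });
}

async function reportScore(studentId: number): Promise<void> {
  try {
    // 'await' reads like synchronous code but does not block the event loop.
    const score = await fetchScore(studentId);
    console.log(`Student ${studentId} scored ${score}`);
  } catch (err) {
    console.error("Lookup failed:", err);
  }
}

reportScore(1);
```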

When you put modern JavaScript together with TypeScript, you get a powerful combination that gives you just about everything you might want for building SOLID applications that can run in the browser, on the server or on mobile and desktop platforms.

3. Shiny New Tools

The nice thing about TypeScript is that you’re free to use whatever tool you like, from a full-fledged IDE like Visual Studio or WebStorm, to a lightweight code editor such as Sublime Text, Atom, Brackets or Visual Studio Code. While there’s nothing wrong with any of these options, I prefer using VS Code for TypeScript development because it comes with TypeScript in the box and the team eats their own dog food by using TypeScript to build the editor.

Coming from a C# background, where I was confined to using Visual Studio on Windows, I appreciate being able to run VS Code on my Mac. VS Code starts quickly and I can open it at a specific folder from either the Finder or Terminal. I also found navigation in VS Code to be straightforward and intuitive, and you can perform many tasks from the command palette, including custom gulp tasks. VS Code functions as a great markdown editor with a side-by-side preview that refreshes in real time as you make changes. It has Git integration and debugging support, as well as a marketplace of third-party extensions that provide a variety of nifty services, such as TypeScript linting and Angular 2 code snippets. Put it all together and VS Code is a perfect fit for TypeScript development.

4. Living in Harmony

One of the most compelling reasons I can think of for picking up TypeScript is that it’s the brainchild of Anders Hejlsberg, the same person who created C# and who also invented Turbo Pascal and Delphi. With such an amazing track record behind him, I have a high degree of confidence in following him into the world of web and native JavaScript development. Anders has made it possible to be more productive and to write code that is more resilient, because the TypeScript compiler catches problems at development time that would otherwise only become apparent at runtime.

Lastly, it’s significant that Anders did not choose to create a language that is different from JavaScript, such as CoffeeScript, but rather one that includes all of JavaScript with optional type annotations that disappear when TypeScript is compiled down to plain old JavaScript. In fact, all JavaScript is valid TypeScript, and you can insert annotations or leave them out wherever you like, giving you the best of both dynamic and static typing. In other words, TypeScript does not dictate that you follow any of its prescriptions.
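
Here is a tiny sketch of that gradual-typing story, assuming the compiler’s default settings (noImplicitAny turned off); the functions are made up for illustration:

```typescript
// Plain JavaScript is already valid TypeScript:
function add(a, b) {
  return a + b;
}

// ...and you can opt in to type annotations wherever they help:
function addTyped(a: number, b: number): number {
  return a + b;
}

add("1", 2);          // allowed: the untyped parameters are implicitly 'any'
// addTyped("1", 2);  // compile-time error: argument of type 'string' is not
//                    // assignable to parameter of type 'number'
console.log(addTyped(1, 2));  // 3
```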

All in all, the latest version of TypeScript gives developers what they pine for—additional features that create flexibility, productivity and power. Most importantly, it creates fewer headaches. I look forward to you joining me in the Essential TypeScript 2.0 with Visual Studio Code course to discover TypeScript’s capabilities.

Happy coding!

Related Courses
Essential TypeScript 2.0 with Visual Studio Code



from
CERTIVIEW

Sunday, 4 December 2016

DDoS Blog Series Part 1: Evolving Internet Attacks Turn Smart Devices Against You

“Over the past year or two, someone has been probing the defenses of the companies that run critical pieces of the Internet. These probes take the form of precisely calibrated attacks designed to determine exactly how well these companies can defend themselves, and what would be required to take them down.” – Bruce Schneier, security expert

A denial-of-service (DoS) attack is a cyber assault intended to block legitimate access to organizations and servers on the Internet. There are two types of DoS attacks: a standard DoS and a distributed denial-of-service (DDoS).

A classic DoS attack is initiated by only a small number of Internet Protocol (IP) addresses—often the assault originates with a single computer or network.

A DDoS attack uses hundreds, thousands or even millions of IP addresses and systems. On Oct. 21, in the largest attack of its kind, hackers used vulnerable home devices such as DVRs and webcams to flood the services of Internet infrastructure provider Dyn. This DDoS attack overwhelmed the victim’s Domain Name System (DNS) servers and made many well-known Internet domains, such as Netflix and Twitter, unavailable for a short period of time.

The attack against Dyn used a Botnet of web-facing devices under control of hacker software called Mirai. Traditionally, hackers use Botnets made up of compromised home computers, PCs and other general purpose systems. Unsuspecting end users open malicious email attachments or respond to prompts and pop-ups from malicious web sites, thereby infecting their computers and becoming part of the Botnet. Mirai was different; it used smart devices like web-accessible baby monitors, surveillance cameras, printers and other Internet of Things (IoT) devices to flood Dyn’s servers on behalf of the attackers.

Typically, a simple DoS attack depends on sending a malformed message across a network—such as the infamous WinNuke—to a target system, or on tricking someone into opening a poisoned file in an application. This can cause a program to close involuntarily, a Blue Screen of Death in Windows or a kernel panic on Mac OS X.

Malformed-message DoS attacks are effective as one-off attacks until the victim strengthens their network or patches their systems, at which point the attack is blocked and fails.

Whether launching a DoS or DDoS, cybercriminals can draw on four other nefarious attack mechanisms:

  • Application floods—servers providing Internet resources are overwhelmed by malicious requests. These could be, for example, against a company’s web servers or against supporting infrastructure. The hacker group Anonymous famously targeted the Church of Scientology with an application flood in 2008, overwhelming their servers and knocking their web site offline for a short time.
  • State-Exhaustion attacks—similar to application floods, these render the underlying computer or network software incapable of response by targeting the connections that are initiated to the victim systems. Whether to web servers or DNS, a system that is deluged cannot respond to legitimate connection requests.
  • Volumetric attacks—as the name implies, they inundate a company’s customer-facing portal or their ISP with malicious network traffic beyond the victims’ ability to respond.
  • Protocol attacks—the objective is to disable complete networks and organizations by misusing normal network traffic, violating the rules for standard communication. This disrupts the ways computers connect to each other or exchange information. Many of the Internet protocols we use today were developed in a far more simplistic time. Hackers can read the Internet standards (called a Request for Comments or RFC) and look for opportunities to use these protocols in a criminal way.

Between the work done by Internet service providers, regulators and the government, efforts are underway to remove the underlying mechanisms used in DoS and DDoS attacks. Part 2 of this blog series will examine how organizations and individuals can avoid becoming victims.

Related Post
How the Seismic DDoS Attack on Dyn Shook the Internet

Related Courses
Cybersecurity Foundations
Certified Network Defender (CND)
Certified Ethical Hacker v9



from
CERTIVIEW

Monday, 28 November 2016

PMBOK Guide 6th Edition: A Deep Dive into the Changes

Are you prepping for the PMP exam? What should you know about the impending new edition of A Guide to the Project Management Body of Knowledge (PMBOK® Guide) before scheduling your examination?

Approximately every four to five years, the Project Management Institute (PMI®) updates the PMBOK® Guide. Currently in its fifth edition, the guide is recognized as an American National Standard by the American National Standards Institute (ANSI). In March of this year, PMI released the exposure draft of the sixth edition for review and commentary. I’m going to provide an overview of the exposure draft and how it will impact the PMP® exam.

According to PMI, we can expect the draft release of the sixth edition in the first quarter of 2017 and the final release in the third quarter of 2017. The draft release will mainly be used by training organizations to allow us time to update our course materials. The PMP and CAPM® examinations are currently scheduled to change over from version five to version six in Q1 of 2018.

Embraces Agile in a Significant Way

In the sixth edition of the PMBOK® Guide, each knowledge area will contain a new section entitled “Approaches for Agile, Iterative and Adaptive Environments.” These sections will describe how the associated knowledge area will integrate, be affected by, and benefit from the adaptive approach Agile utilizes. Additional Agile-related details will also be included in a related appendix.

Other additions to the content of the PMBOK® Guide include:

  • More detailed information on the PMI Talent Triangle and the skills that are essential to being successful in today’s market.
  • Greater emphasis will be given to strategic and business knowledge. Related project management business documents will also be given greater attention and discussion.

Structural Changes

With the sixth edition, PMI has made a significant adjustment to the structure of the PMBOK® Guide in that it will be organized by process group and not by knowledge area. This is good news for those who will be studying to become PMPs as well as practicing project managers. The knowledge areas of project management will now be presented in the PMBOK® Guide in the manner they are actually tested on the exam and as they are accomplished in real world practice.

In the current edition of the PMBOK® Guide, the first three chapters serve as introductory material and general information about the five process groups and ten knowledge areas. In the sixth edition, these first three chapters will be combined into two chapters, and chapter three will contain information regarding “The Role of the Project Manager.” In this chapter, the varying aspects of the project manager’s role will be associated with their corresponding areas of the PMI Talent Triangle.

A new area of emphasis will be given to differentiating between the processes which are “ongoing” (meaning those that execute on a continual basis) versus those which are “non-ongoing” (those which execute on a singular basis). Additional emphasis will also be given to the concept of “Project Scope” versus “Product Scope.” The area of “Earned Value Management” will see the addition of “Earned Schedule Management.”

Attention will be given to distinguishing between “communications” and the communication that exists between people. Communication between people will be referred to as “communication” (singular), and the exchange of email, text and other related documents will be referred to as “communications” (plural).

Knowledge Area of Risk Management

Regarding the Knowledge Area of Risk Management, PMI has developed a new strategy, “Escalate Responses.” This describes a situation in which the project manager escalates a risk to the appropriate party, and by doing so, the risk is no longer their responsibility. Upon escalation, the project manager has the option of:

1) Removing the risk from the risk register of the project, or

2) Maintaining it in the risk register with a new classification of “Escalated/Assigned To.”

A new “Lessons Learned Register” has been added. Instead of conducting a single lessons learned meeting at the end of the project, project managers will be encouraged to update the register on a frequent basis, such as at the completion of major or significant phases, milestones and events.

Processes and Knowledge Areas

PMI’s project framework will still contain five process groups and 10 knowledge areas but will embrace 49 processes, expanded from the current 47. Additionally, two of the current Knowledge Areas have been renamed:

  • “Project Time Management” has been renamed “Project Schedule Management.”
  • “Project Human Resource Management” has been renamed “Project Resource Management.”


The process “Close Procurements” has been deleted. Its functionality has been consolidated into the “Close Project or Phase” process. Three new processes will be added:

  • From the Executing Process Group (Section 4) come process 4.2, “Manage Project Knowledge,” and process 4.8, “Implement Risk Responses.”
  • From the Monitoring and Controlling Process Group (Section 5) comes process 5.8, “Control Resources.”

Some of the names of current processes will also change.

  • “Perform Quality Assurance” will change to “Manage Quality.”
  • “Plan Human Resource Management” will change to “Plan Resource Management.”
  • “Acquire Project Team” will change to “Acquire Resources.”
  • “Control Communications” will change to “Monitor Communications.”
  • “Control Risks” will change to “Monitor Risks.”
  • “Plan Stakeholder Management” will change to “Plan Stakeholder Engagement.”
  • “Control Stakeholder Engagement” will change to “Monitor Stakeholder Engagement.”

The inputs and outputs in the ITTO table will be somewhat simplified. The Tools and Techniques will additionally be grouped into common headings:

  • “Project Management Plan Components”
  • “Project Documents”

The various components of the project management plan that are currently listed as inputs to a process and/or those that become updated as outputs from a process will no longer be listed individually as inputs or outputs. Instead, the more generic “Project Management Plan” will be the input and “Project Management Plan Updates” will be the related output. Underneath the list of inputs and outputs will be a list of potential project management plan components. The components in a particular list will be dependent upon the needs of the project.

Several new appendices will also be added. These include:

  • Summary of Key Concepts
  • Summary of Tailoring Considerations
  • Summary of Tools and Techniques
  • Adaptive and Iterative Approaches

When to Schedule Your Exam

If you are taking your exam before January 1, 2018, you should continue to study from and prepare for your examination using the Fifth Edition of the PMBOK® Guide along with the examination content outline PMI makes available. If you are taking your exam after January 1, 2018, you should plan to study from and prepare for your examination using the Sixth Edition of the PMBOK® Guide.

Looking for tips on how to prep for the PMP exam? We have you covered there too!

About the Author
Tim McClintock is a speaker, business consultant and certified project management professional (PMP®) who specializes in both strategic business planning and development as well as tactical management practices across several sectors including corporate clients, governmental agencies, and non-profit organizations. His articles and white papers have appeared in publications such as Business Week, Tech Republic, and The Modern Analyst. He has worked with clients such as Cisco, Intel, Deloitte & Touche, Booz Allen Hamilton, Verizon, Citigroup, Lockheed Martin, Exxon Mobil, MetLife, Sabre, the cities of Chicago, Los Angeles, and Palo Alto, National Aeronautics and Space Administration (NASA), National Security Agency (NSA), Defense Information Systems Agency (DISA), Lawrence Livermore National Laboratory, General Dynamics, National Institutes of Health (NIH), MITRE Corporation, and the United States Military.



from
CERTIVIEW